When setting up Calico with the Tigera operator, one crucial step is defining the correct CIDR range. A mismatch here leaves pods stuck in ContainerCreating. Follow this streamlined approach carefully.
Step 1: Deploy Tigera Operator
First, install the Tigera Operator. Use replace if you are updating an existing deployment:
kubectl replace -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
or create for a fresh deployment (create is preferred over apply here because the operator manifest's CRDs are too large for the last-applied-configuration annotation that apply writes):
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
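To confirm the operator came up, check its namespace; the pod should reach Running within a minute or so:
kubectl get pods -n tigera-operator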
Step 2: Confirm Your Kubernetes Pod CIDR
This step ensures your Calico CIDR matches the CIDR defined during your Kubernetes cluster initialization. Use:
kubectl cluster-info dump | grep -m 1 cluster-cidr
Example output (this is the value you will reuse below):
"--cluster-cidr=192.168.0.0/24"
Important Note:
The above command is reliable because it reads the flag directly from your kube-controller-manager settings.
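If the grep returns nothing (some setups don't expose the flag in the dump) and your cluster was built with kubeadm, the same value is recorded as podSubnet in the kubeadm-config ConfigMap:
kubectl get configmap kubeadm-config -n kube-system -o yaml | grep podSubnet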
Step 3: Download Calico Custom Resources YAML
Download the manifest specifically for custom resources (CR):
wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
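Optionally, locate the lines you will need to change before opening an editor:
grep -n "cidr" custom-resources.yaml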
Step 4: Edit custom-resources.yaml (Set CIDR & VXLAN)
Open and edit this YAML:
vim custom-resources.yaml
Set your correct CIDR (matching the above step):
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - blockSize: 26
        cidr: 192.168.0.0/24   # <-- Set exactly as your cluster-cidr
        encapsulation: VXLAN
        natOutgoing: Enabled
        nodeSelector: all()
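Before applying, you can have the API server validate the edited manifest with a server-side dry run (this works because the CRDs were installed in Step 1):
kubectl apply --dry-run=server -f custom-resources.yaml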
Step 5: Apply the Custom Resources Configuration
Use this command (unlike the operator manifest in Step 1, apply works fine for this small manifest):
kubectl apply -f custom-resources.yaml
Step 6: Verify Calico Pods Status
Wait about 2–3 minutes and then:
kubectl get pods -n calico-system
Pods should show status Running.
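The operator also publishes an overall health summary; once Calico has converged, AVAILABLE should read True:
kubectl get tigerastatus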
Step 7: Troubleshooting Calico (if Pods stuck in ContainerCreating)
Check the kubelet logs and the CNI config directory, and restart containerd if the runtime itself looks unhealthy:
journalctl -u kubelet
ls /etc/cni/net.d/
sudo systemctl restart containerd
Check Calico node logs directly if required:
kubectl logs -n calico-system -l k8s-app=calico-node
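Pod events usually name the exact CNI failure; describing a stuck pod or sorting recent events is often the fastest route (replace the placeholders with your pod and namespace):
kubectl describe pod <stuck-pod-name> -n <its-namespace>
kubectl get events -A --sort-by=.lastTimestamp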
Step 8: Disable BGP (Single-Node Clusters)
For single-node Kubernetes clusters, disable BGP to avoid peering errors: with no other nodes to peer with, BGP only produces noise, and VXLAN already handles the encapsulation.
Edit your custom-resources.yaml:
spec:
  calicoNetwork:
    bgp: Disabled   # <-- Add this line
    ipPools:
      - cidr: 192.168.0.0/24
        encapsulation: VXLAN
        natOutgoing: Enabled
Re-apply your configuration:
kubectl apply -f custom-resources.yaml
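You can confirm the change landed on the live Installation resource:
kubectl get installation default -o yaml | grep bgp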
Step 9: Restart Calico Nodes (for clean initialization)
kubectl delete pods -n calico-system -l k8s-app=calico-node
This forces pods to restart with the fresh configuration.
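Since calico-node is a DaemonSet, you can wait on the rollout instead of polling pods:
kubectl rollout status daemonset/calico-node -n calico-system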
Final Verification Commands
Ensure everything is working as expected:
kubectl get nodes -o wide
kubectl get pods -n calico-system
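As a final smoke test, a throwaway pod should receive an IP from your pool (192.168.0.0/24 in this walkthrough):
kubectl run cidr-test --image=busybox --restart=Never -- sleep 3600
kubectl get pod cidr-test -o wide   # the IP column should fall inside the pool
kubectl delete pod cidr-test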
Quick Recap: Important File References
custom-resources.yaml: defines the CIDR, VXLAN, and BGP settings for Calico
kubectl cluster-info dump | grep -m 1 cluster-cidr: confirms the pod CIDR your cluster was initialized with
Best Practices:
Always match the CIDR in custom-resources.yaml with your cluster’s initialized CIDR.
Disable BGP explicitly in single-node setups to prevent unnecessary errors.
Wait at least 2–3 minutes after applying changes for Calico to stabilize.