
Understanding Kubernetes networking: pods with the same IP

I checked the pods in the kube-system namespace and noticed that some pods share the same IP address. The pods that share the same IP address appear to be on the same node.

I'm confused as to how some pods came to have the same IP.

This was reported in issue 51322 and can depend on the network plugin you are using.

The issue was seen when using the basic kubenet network plugin on Linux.

Sometimes, a reset/reboot can help.

I suspect the nodes have been configured with overlapping podCIDRs in such cases.
The pod CIDRs can be checked with kubectl get node -o jsonpath='{.items[*].spec.podCIDR}'
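The overlap check itself can be sketched as a small script. The two CIDR values below are hypothetical sample data chosen to trigger the duplicate branch; on a real cluster they would come from the kubectl command above:

```shell
# On a real cluster, the per-node pod CIDRs would come from:
#   kubectl get node -o jsonpath='{.items[*].spec.podCIDR}'
# Hypothetical sample (two nodes that were both handed the same range):
cidrs="10.244.0.0/24 10.244.0.0/24"

# If the number of distinct CIDRs is smaller than the number of nodes,
# at least two nodes share a podCIDR and their pods can collide on IPs.
total=$(echo "$cidrs" | tr ' ' '\n' | wc -l)
unique=$(echo "$cidrs" | tr ' ' '\n' | sort -u | wc -l)

if [ "$total" -ne "$unique" ]; then
  result="duplicate podCIDRs detected"
else
  result="podCIDRs are distinct"
fi
echo "$result"
```

With the sample values above this prints "duplicate podCIDRs detected"; with distinct per-node ranges it would report them as distinct.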

Please check the Kubernetes manifests of the pods that have the same IP address as their node. If they have the parameter hostNetwork set to true, then this is not an issue: such pods use the node's network namespace and therefore report the node's IP.
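A quick way to do that check across a namespace is to print each pod's hostNetwork flag and filter. The kubectl command is shown in the comment; the sample lines below are hypothetical stand-ins for its output so the filtering step itself is visible:

```shell
# On a real cluster, a per-pod hostNetwork listing can be produced with:
#   kubectl get pods -n kube-system \
#     -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.hostNetwork}{"\n"}{end}'
# Hypothetical sample output (pods without hostNetwork print an empty value):
sample='etcd-master-node true
kube-apiserver-master-node true
coredns-f9fd979d6-nghhz '

# Keep only pods running in the node's network namespace.
host_pods=$(echo "$sample" | awk '$2 == "true" {print $1}')
echo "$host_pods"
```

Any pod listed here is expected to share its node's IP, so seeing it with the node address is normal.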

Yes. I have checked my 2-node cluster created using kubeadm on VMs running on AWS. In the manifest files for the static Pods, hostNetwork: true is set. The pods are:

  • -rw------- 1 root root 2100 Feb 4 16:48 etcd.yaml

  • -rw------- 1 root root 3669 Feb 4 16:48 kube-apiserver.yaml

  • -rw------- 1 root root 3346 Feb 4 16:48 kube-controller-manager.yaml

  • -rw------- 1 root root 1385 Feb 4 16:48 kube-scheduler.yaml

  • I have checked with weave and flannel.

  • All other pods get IPs from the pod network CIDR that was set during cluster initialization by kubeadm:

kubeadm init --pod-network-cidr=10.244.0.0/16
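Since kubeadm was given 10.244.0.0/16, every node's podCIDR should be a sub-range of it; for a /16 this reduces to a prefix check on the first two octets. This is a minimal sketch of that sanity check, using sample node CIDRs mirroring the output below rather than live cluster data:

```shell
# The cluster was initialized with --pod-network-cidr=10.244.0.0/16,
# so node podCIDRs should start with "10.244.". On a real cluster the
# values would come from:
#   kubectl get node -o jsonpath='{.items[*].spec.podCIDR}'
cluster_prefix="10.244."
node_cidrs="10.244.0.0/24 10.244.1.0/24"   # sample per-node ranges

mismatch=0
for c in $node_cidrs; do
  case "$c" in
    "$cluster_prefix"*) ;;   # inside the cluster pod CIDR, as expected
    *) mismatch=1 ;;         # the network plugin is using a different range
  esac
done

if [ "$mismatch" -eq 0 ]; then
  echo "all node podCIDRs fall inside ${cluster_prefix}0.0/16"
fi
```

A mismatch here would point at the plugin ignoring the kubeadm-supplied CIDR, which is the situation described later in this thread.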

ubuntu@master-node:~$ kubectl get all -o wide --all-namespaces

NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
default       pod/my-nginx-deployment-5976fbfd94-2n2ff      1/1     Running   0          20m   10.244.1.17    worker-node01
default       pod/my-nginx-deployment-5976fbfd94-4sghq      1/1     Running   0          20m   10.244.1.12    worker-node01
default       pod/my-nginx-deployment-5976fbfd94-57lfp      1/1     Running   0          20m   10.244.1.14    worker-node01
default       pod/my-nginx-deployment-5976fbfd94-77nrr      1/1     Running   0          20m   10.244.1.18    worker-node01
default       pod/my-nginx-deployment-5976fbfd94-m7qbn      1/1     Running   0          20m   10.244.1.15    worker-node01
default       pod/my-nginx-deployment-5976fbfd94-nsxvm      1/1     Running   0          20m   10.244.1.19    worker-node01
default       pod/my-nginx-deployment-5976fbfd94-r5hr6      1/1     Running   0          20m   10.244.1.16    worker-node01
default       pod/my-nginx-deployment-5976fbfd94-whtcg      1/1     Running   0          20m   10.244.1.13    worker-node01
kube-system   pod/coredns-f9fd979d6-nghhz                   1/1     Running   0          63m   10.244.0.3     master-node
kube-system   pod/coredns-f9fd979d6-pdbrx                   1/1     Running   0          63m   10.244.0.2     master-node
kube-system   pod/etcd-master-node                          1/1     Running   0          63m   172.31.8.115   master-node
kube-system   pod/kube-apiserver-master-node                1/1     Running   0          63m   172.31.8.115   master-node
kube-system   pod/kube-controller-manager-master-node       1/1     Running   0          63m   172.31.8.115   master-node
kube-system   pod/kube-proxy-8k9s4                          1/1     Running   0          63m   172.31.8.115   master-node
kube-system   pod/kube-proxy-ln6gb                          1/1     Running   0          37m   172.31.3.75    worker-node01
kube-system   pod/kube-scheduler-master-node                1/1     Running   0          63m   172.31.8.115   master-node
kube-system   pod/weave-net-jc92w                           2/2     Running   1          24m   172.31.8.115   master-node
kube-system   pod/weave-net-l9rg2                           2/2     Running   1          24m   172.31.3.75    worker-node01

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       service/kubernetes   ClusterIP   10.96.0.1                  443/TCP                  63m
kube-system   service/kube-dns     ClusterIP   10.96.0.10                 53/UDP,53/TCP,9153/TCP   63m   k8s-app=kube-dns

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS        IMAGES                                                                                      SELECTOR
kube-system   daemonset.apps/kube-proxy   2         2         2       2            2           kubernetes.io/os=linux   63m   kube-proxy        k8s.gcr.io/kube-proxy:v1.19.16                                                              k8s-app=kube-proxy
kube-system   daemonset.apps/weave-net    2         2         2       2            2                                    24m   weave,weave-npc   ghcr.io/weaveworks/launcher/weave-kube:2.8.1,ghcr.io/weaveworks/launcher/weave-npc:2.8.1   name=weave-net

NAMESPACE     NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                     SELECTOR
default       deployment.apps/my-nginx-deployment   8/8     8            8           20m   nginx        nginx                      app=my-nginx-deployment
kube-system   deployment.apps/coredns               2/2     2            2           63m   coredns      k8s.gcr.io/coredns:1.7.0   k8s-app=kube-dns

NAMESPACE     NAME                                             DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                     SELECTOR
default       replicaset.apps/my-nginx-deployment-5976fbfd94   8         8         8       20m   nginx        nginx                      app=my-nginx-deployment,pod-template-hash=5976fbfd94
kube-system   replicaset.apps/coredns-f9fd979d6                2         2         2       63m   coredns      k8s.gcr.io/coredns:1.7.0   k8s-app=kube-dns,pod-template-hash=f9fd979d6
ubuntu@master-node:~$

I will add another worker node and check.

Note: I was also testing with a one-master, 3-worker-node cluster, where pods were getting IPs from other CIDRs (10.38.x.x and 10.39.x.x). I am not sure, but the order in which the setup steps are performed seems to matter. I could not fix that cluster.

(Screenshot: master-node after logging in using PuTTY)

So it depends on the network plugin, and in some cases the plugin will override the pod CIDR provided during initialization.
