
Kubernetes CNI vs Kube-proxy

I'm not sure what the difference is between the CNI plugin and kube-proxy in Kubernetes. From what I get out of the documentation, I conclude the following:

Kube-proxy is responsible for communicating with the master node and routing.

CNI provides connectivity by assigning IP addresses to pods and services, and provides reachability through its routing daemon.

Routing seems to be an overlapping function between the two. Is that true?

Kind regards, Charles

OVERLAY NETWORK

Kubernetes assumes that every pod has an IP address and that you can communicate with services inside that pod by using that IP address. When I say “overlay network” this is what I mean (“the system that lets you refer to a pod by its IP address”).

All other Kubernetes networking stuff relies on the overlay networking working correctly.

There are a lot of overlay network backends (calico, flannel, weave) and the landscape is pretty confusing. But as far as I'm concerned an overlay network has 2 responsibilities:

  1. Make sure your pods can send network requests outside your cluster
  2. Keep a stable mapping of nodes to subnets and keep every node in your cluster updated with that mapping. Do the right thing when nodes are added & removed.
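
For example, responsibility 2 is visible in the Kubernetes API itself when the overlay network uses the per-node podCIDR allocation (flannel works this way, and Calico can; the node names and subnets below are made up for illustration):

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR
NAME        PODCIDR
master-01   10.244.0.0/24
worker-01   10.244.1.0/24
worker-02   10.244.2.0/24

The overlay network backend has to keep every node's routing in sync with this node-to-subnet mapping as nodes are added and removed.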

KUBE-PROXY

To understand kube-proxy, here's how Kubernetes services work. A service is a collection of pods, each of which has its own IP address (like 10.1.0.3, 10.2.3.5, 10.3.5.6).

  1. Every Kubernetes service gets an IP address (like 10.23.1.2)
  2. kube-dns resolves Kubernetes service DNS names to IP addresses (so my-svc.my-namespace.svc.cluster.local might map to 10.23.1.2)
  3. kube-proxy sets up iptables rules in order to do random load balancing between them.

So when you make a request to my-svc.my-namespace.svc.cluster.local, it resolves to 10.23.1.2, and then iptables rules on your local host (generated by kube-proxy) redirect it to one of 10.1.0.3 or 10.2.3.5 or 10.3.5.6 at random.
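
As a rough sketch of what those rules look like in iptables mode (the chain-name suffixes and port 80 below are made up for illustration; kube-proxy generates hashed names, and the real output is much longer), a service with ClusterIP 10.23.1.2 and those three pod endpoints ends up with nat-table rules along these lines:

$ sudo iptables-save -t nat        # output heavily trimmed
-A KUBE-SERVICES -d 10.23.1.2/32 -p tcp -m tcp --dport 80 -j KUBE-SVC-XXXXXXXXXXXXXXXX
-A KUBE-SVC-XXXXXXXXXXXXXXXX -m statistic --mode random --probability 0.33333 -j KUBE-SEP-AAAAAAAAAAAAAAAA
-A KUBE-SVC-XXXXXXXXXXXXXXXX -m statistic --mode random --probability 0.50000 -j KUBE-SEP-BBBBBBBBBBBBBBBB
-A KUBE-SVC-XXXXXXXXXXXXXXXX -j KUBE-SEP-CCCCCCCCCCCCCCCC
-A KUBE-SEP-AAAAAAAAAAAAAAAA -p tcp -m tcp -j DNAT --to-destination 10.1.0.3:80
-A KUBE-SEP-BBBBBBBBBBBBBBBB -p tcp -m tcp -j DNAT --to-destination 10.2.3.5:80
-A KUBE-SEP-CCCCCCCCCCCCCCCC -p tcp -m tcp -j DNAT --to-destination 10.3.5.6:80

The first rule matches traffic to the ClusterIP, and the statistic rules pick an endpoint with probability 1/3, then 1/2 of the remainder, then the last one as a fallthrough; that is the "random load balancing" from step 3, implemented as a DNAT to one Pod IP.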

In short, the overlay network defines the underlying network that the various components of Kubernetes use to communicate, while kube-proxy is the tool that generates the iptables magic which lets you connect to any pod (via a service) no matter which node that pod lives on.

Parts of this answer were taken from this blog:

https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/

Hope this gives you a brief idea about Kubernetes networking.

There are two kinds of IP in Kubernetes: Cluster IP and Pod IP.
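
You can see both kinds side by side with kubectl; for example, for the cluster DNS (the pod names, node name, and ages below are illustrative, and the output columns are trimmed):

$ kubectl -n kube-system get svc kube-dns
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   30d

$ kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP             NODE
coredns-xxxxxxxxxx-aaaaa   1/1     Running   0          30d   10.244.0.137   master-01
coredns-xxxxxxxxxx-bbbbb   1/1     Running   0          30d   10.244.0.138   master-01

Here 10.96.0.10 is a Cluster IP (owned by the service and handled by kube-proxy), while 10.244.0.137 and 10.244.0.138 are Pod IPs (assigned by the CNI plugin); both show up again in the examples below.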

CNI

CNI cares about Pod IP.

The CNI plugin focuses on building up the overlay network, without which Pods can't communicate with each other. The task of the CNI plugin is to assign a Pod IP to the Pod when it's scheduled, to build a virtual device for this IP, and to make this IP accessible from every node of the cluster.
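
For context, CNI plugins are configured through JSON files under /etc/cni/net.d/ on each node, and the container runtime calls the plugin named there every time a Pod is created or deleted. Here is a minimal sketch using the reference bridge and host-local IPAM plugins rather than Calico (the file name and subnet are just examples):

$ cat /etc/cni/net.d/10-mynet.conf
{
    "cniVersion": "0.3.1",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
    }
}

With a config like this, the bridge plugin creates the veth device for the Pod and host-local hands out the Pod IP from the node's subnet; Calico does the same two jobs with its own IPAM and the cali*/tunl0 devices you see below.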

In Calico, this is implemented by N host routes (N = the number of cali veth devices) and M direct routes on tunl0 (M = the number of K8s cluster nodes).

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.130.29.1     0.0.0.0         UG    100    0        0 ens32
10.130.29.0     0.0.0.0         255.255.255.0   U     100    0        0 ens32
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 *
10.244.0.137    0.0.0.0         255.255.255.255 UH    0      0        0 calid3c6b0469a6
10.244.0.138    0.0.0.0         255.255.255.255 UH    0      0        0 calidbc2311f514
10.244.0.140    0.0.0.0         255.255.255.255 UH    0      0        0 califb4eac25ec6
10.244.1.0      10.130.29.81    255.255.255.0   UG    0      0        0 tunl0
10.244.2.0      10.130.29.82    255.255.255.0   UG    0      0        0 tunl0

In this case, 10.244.0.0/16 is the Pod IP CIDR, and 10.130.29.81 is a node in the cluster. If you send a TCP request to 10.244.1.141, it will be forwarded to 10.130.29.81 according to the 7th rule. And on 10.130.29.81, there will be a route rule like this:

10.244.1.141    0.0.0.0         255.255.255.255 UH    0      0        0 cali4eac25ec62b

This will finally send the request to the correct Pod.
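
You can confirm which rule the kernel would pick for that destination with ip route get (output trimmed; the exact fields depend on your setup):

$ ip route get 10.244.1.141
10.244.1.141 via 10.130.29.81 dev tunl0 ...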

I'm not sure why a daemon is necessary; I guess the daemon is there to prevent the route rules it created from being deleted manually.

kube-proxy

kube-proxy's job is rather simple: it just redirects requests from a Cluster IP to a Pod IP.

kube-proxy has two modes, IPVS and iptables. If your kube-proxy is working in IPVS mode, you can see the redirect rules it created by running the following command on any node in the cluster:

$ ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 10.130.29.80:6443            Masq    1      6          0         
  -> 10.130.29.81:6443            Masq    1      1          0         
  -> 10.130.29.82:6443            Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.137:53              Masq    1      0          0         
  -> 10.244.0.138:53              Masq    1      0          0   
...

In this case, you can see the default Cluster IP of CoreDNS, 10.96.0.10, and behind it are two real servers with Pod IPs 10.244.0.137 and 10.244.0.138.
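
If you are not sure which mode your kube-proxy is in, it reports it on its metrics port (10249 by default; this assumes the metrics endpoint is reachable from the node you are on):

$ curl -s http://localhost:10249/proxyMode
ipvs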

Rules like these are exactly what kube-proxy creates and maintains.

PS: iptables mode is almost the same, but the iptables rules look ugly, so I won't paste them here. :p

My 2 cents, correct me if this is not accurate:

Kube-proxy controls the K8s network communication, and that network is built by the CNI plugin.

The CNI plugin implements the CNI (Container Network Interface) specification.

CNI is an overlay network for simplifying network communication.
