
kube-proxy in iptables mode is not routing service traffic

What I have:

  • Kubernetes: v1.1.2
  • iptables: v1.4.21
  • kernel: 3.10.0-327.3.1.el7.x86_64 (CentOS 7)
  • networking: flannel with the UDP backend
  • no cloud provider

What I did

I enabled iptables mode with the --proxy-mode=iptables argument, then checked the iptables NAT rules:

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */
DOCKER     all  --  anywhere            !loopback/8           ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  SIDR26KUBEAPMORANGE-005/26  anywhere
MASQUERADE  all  --  172.17.0.0/16        anywhere
MASQUERADE  all  --  anywhere             anywhere             /* kubernetes service traffic requiring SNAT */ mark match 0x4d415351

Chain DOCKER (2 references)
target     prot opt source               destination

Chain KUBE-NODEPORTS (1 references)
target     prot opt source               destination

Chain KUBE-SEP-3SX6E5663KCZDTLC (1 references)
target     prot opt source               destination
MARK       all  --  172.20.10.130        anywhere             /* default/nc-service: */ MARK set 0x4d415351
DNAT       tcp  --  anywhere             anywhere             /* default/nc-service: */ tcp to:172.20.10.130:9000

Chain KUBE-SEP-Q4LJF4YJE6VUB3Y2 (1 references)
target     prot opt source               destination
MARK       all  --  SIDR26KUBEAPMORANGE-001.serviceengage.com  anywhere             /* default/kubernetes: */ MARK set 0x4d415351
DNAT       tcp  --  anywhere             anywhere             /* default/kubernetes: */ tcp to:10.62.66.254:9443

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination
KUBE-SVC-6N4SJQIF3IX3FORG  tcp  --  anywhere             172.21.0.1           /* default/kubernetes: cluster IP */ tcp dpt:https
KUBE-SVC-362XK5X6TGXLXGID  tcp  --  anywhere             172.21.145.28        /* default/nc-service: cluster IP */ tcp dpt:commplex-main
KUBE-NODEPORTS  all  --  anywhere             anywhere             /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-362XK5X6TGXLXGID (1 references)
target     prot opt source               destination
KUBE-SEP-3SX6E5663KCZDTLC  all  --  anywhere             anywhere             /* default/nc-service: */

Chain KUBE-SVC-6N4SJQIF3IX3FORG (1 references)
target     prot opt source               destination
KUBE-SEP-Q4LJF4YJE6VUB3Y2  all  --  anywhere             anywhere             /* default/kubernetes: */

When I make an nc request to the service IP from another machine (in my case 10.116.0.2), the connection times out:

nc -v 172.21.145.28 5000
Ncat: Version 6.40 ( http://nmap.org/ncat )
hello
Ncat: Connection timed out.

A request directly to the Pod at 172.20.10.130:9000, however, works fine:

nc -v 172.20.10.130 9000
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 172.20.10.130:9000.
hello
yes

From the dmesg log, I can see:

[10153.318195] DBG@OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0
[10153.318282] DBG@OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0
[10153.318374] DBG@POSTROUTING: IN= OUT=flannel0 SRC=10.62.66.223 DST=172.20.10.130 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=9000 WINDOW=29200 RES=0x00 SYN URGP=0
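The DBG@ prefixes above come from iptables LOG rules added for tracing. A minimal sketch of how such trace rules could be installed (the prefixes, ports, and rule positions are assumptions, not taken from the original post):

```shell
# Log packets entering the nat OUTPUT chain (before DNAT) and the nat
# POSTROUTING chain (after DNAT). Requires root; the --log-prefix string
# is an arbitrary marker that shows up in dmesg / the kernel log.
iptables -t nat -I OUTPUT 1 -p tcp --dport 5000 \
  -j LOG --log-prefix "DBG@OUTPUT: "
iptables -t nat -I POSTROUTING 1 -p tcp --dport 9000 \
  -j LOG --log-prefix "DBG@POSTROUTING: "

# Reproduce the request, then inspect the kernel log:
dmesg | tail
```

Notably, the trace shows the DNAT itself working: by POSTROUTING the destination has been rewritten to 172.20.10.130:9000 and the SYN leaves via flannel0, which suggests the failure is on the return path rather than in the DNAT rules.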

I also found that when I am on the machine where the Pod is running, I can connect through the service IP successfully:

nc -v 172.21.145.28 5000
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 172.21.145.28:5000.
hello
yes

I am wondering why this happens and how to fix it.

I hit exactly the same issue, on Kubernetes 1.1.7 and 1.2.0. Starting flannel without --ip-masq and adding the --masquerade-all=true parameter to kube-proxy fixed it for me.
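For reference, the combination described above might look like the following; the etcd endpoint is an illustrative assumption, and both daemons would normally be launched from systemd units rather than by hand:

```shell
# Start flanneld WITHOUT --ip-masq, so flannel itself does not
# masquerade traffic leaving the overlay:
flanneld -etcd-endpoints=http://127.0.0.1:2379 &

# Start kube-proxy in iptables mode with --masquerade-all=true, so ALL
# traffic hitting a service IP is SNATed to the node's address:
kube-proxy --proxy-mode=iptables --masquerade-all=true
```

With --masquerade-all the Pod sees the DNAT-ing node as the connection's source, so the reply is forced back through that same node, where conntrack can reverse the DNAT; without it, the Pod replies directly from its own IP and the remote client drops the unexpected packet.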

According to "kube-proxy in iptables mode is not working", you may have to add a route that sends your service IPs to the Docker bridge.
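A sketch of such a route, assuming the service cluster CIDR visible in the rules above (172.21.0.0/16) and the default docker0 bridge; both are assumptions taken from this particular setup:

```shell
# On the node, route the service CIDR toward the Docker bridge so that
# traffic for service IPs traverses the local iptables DNAT rules
# instead of being sent to the default gateway unchanged:
ip route add 172.21.0.0/16 dev docker0
```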
