Can't reach Kubernetes service from outside of node when kube-proxy in iptables mode

I have a single-node (master + node) Kubernetes deployment running on CoreOS, with kube-proxy running in iptables mode and flannel for container networking (no Calico).

kube-proxy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=http://127.0.0.1:8080
    - --hostname-override=10.0.0.144
    - --proxy-mode=iptables
    - --bind-address=0.0.0.0
    - --cluster-cidr=10.1.0.0/16
    - --masquerade-all=true
    securityContext:
      privileged: true

I've created a deployment, then exposed it using a Service of type NodePort.

user@node ~ $ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
  --labels=app=hostnames \
  --port=9376 \
  --replicas=3

user@node ~ $ kubectl expose deployment hostnames \
  --port=80 \
  --target-port=9376 \
  --type=NodePort

user@node ~ $ kubectl get svc hostnames
NAME        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
hostnames   10.1.50.64   <nodes>       80:30177/TCP   6m
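For reference, the imperative `kubectl expose` command above corresponds roughly to the following declarative Service manifest (a sketch: the `app=hostnames` selector comes from the labels set by `kubectl run`, and the nodePort is whatever the cluster happened to assign, 30177 in this case):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hostnames
spec:
  type: NodePort
  selector:
    app: hostnames          # matches the pods created by kubectl run
  ports:
  - port: 80                # cluster-IP port
    targetPort: 9376        # container port
    nodePort: 30177         # auto-assigned from the 30000-32767 range
```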

I can curl successfully from the node (loopback and eth0 IP):

user@node ~ $ curl localhost:30177
hostnames-3799501552-xfq08

user@node ~ $ curl 10.0.0.144:30177
hostnames-3799501552-xfq08

However, I cannot curl from outside the node. I've tried from both a client machine outside the node's network (with correct firewall rules) and a machine inside the node's private network with the network firewall completely open between the two machines, with no luck.

I'm fairly confident that it's an iptables/kube-proxy issue, because if I change the kube-proxy config from --proxy-mode=iptables to --proxy-mode=userspace, I can access the service from both external machines. Also, if I bypass Kubernetes and run a plain Docker container, I have no problems with external access.
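One quick sanity check is to confirm that kube-proxy actually programmed the NodePort DNAT rules for the assigned port. A hedged sketch below: the embedded dump stands in for real `sudo iptables-save` output on the node, so the script is runnable anywhere.

```shell
# Count NodePort rules for port 30177 in an iptables-save style dump.
# On the real node you would run instead:
#   sudo iptables-save -t nat | grep -c 'dport 30177'
dump='-A KUBE-NODEPORTS -p tcp -m comment --comment "default/hostnames:" -m tcp --dport 30177 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/hostnames:" -m tcp --dport 30177 -j KUBE-SVC-NWV5X2332I4OT4T3'
matches=$(printf '%s\n' "$dump" | grep -c 'dport 30177')
echo "$matches"
```

Two matches (the KUBE-MARK-MASQ rule and the jump to the service chain) mean kube-proxy did its part, which points the investigation at the filter table and boot-time rules instead.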

Here are the current iptables rules:

user@node ~ $ iptables-save
# Generated by iptables-save v1.4.21 on Mon Feb  6 04:46:02 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-4IIYBTTZSUAZV53G - [0:0]
:KUBE-SEP-4TMFMGA4TTORJ5E4 - [0:0]
:KUBE-SEP-DUUUKFKBBSQSAJB2 - [0:0]
:KUBE-SEP-XONOXX2F6J6VHAVB - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-NWV5X2332I4OT4T3 - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 10.1.0.0/16 -d 10.1.0.0/16 -j RETURN
-A POSTROUTING -s 10.1.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.1.0.0/16 -d 10.1.0.0/16 -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/hostnames:" -m tcp --dport 30177 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/hostnames:" -m tcp --dport 30177 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-4IIYBTTZSUAZV53G -s 10.0.0.144/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-4IIYBTTZSUAZV53G -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-4IIYBTTZSUAZV53G --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.0.0.144:6443
-A KUBE-SEP-4TMFMGA4TTORJ5E4 -s 10.1.34.2/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-4TMFMGA4TTORJ5E4 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.2:9376
-A KUBE-SEP-DUUUKFKBBSQSAJB2 -s 10.1.34.3/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-DUUUKFKBBSQSAJB2 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.3:9376
-A KUBE-SEP-XONOXX2F6J6VHAVB -s 10.1.34.4/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-XONOXX2F6J6VHAVB -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.1.34.4:9376
-A KUBE-SERVICES -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES ! -s 10.1.0.0/16 -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.1.50.64/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-SERVICES -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES ! -s 10.1.0.0/16 -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.1.50.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-4IIYBTTZSUAZV53G --mask 255.255.255.255 --rsource -j KUBE-SEP-4IIYBTTZSUAZV53G
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-4IIYBTTZSUAZV53G
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-4TMFMGA4TTORJ5E4
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-DUUUKFKBBSQSAJB2
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-XONOXX2F6J6VHAVB
COMMIT
# Completed on Mon Feb  6 04:46:02 2017
# Generated by iptables-save v1.4.21 on Mon Feb  6 04:46:02 2017
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [67:14455]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth0 -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 11 -j ACCEPT
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
# Completed on Mon Feb  6 04:46:02 2017

I'm not sure what to look for in the rules. Can someone with more experience make some suggestions on troubleshooting?

Fixed it. The problem was that I had some default iptables rules applied on startup, which must have overridden parts of the dynamic rule set created by kube-proxy.

The difference between the working and non-working rule sets was as follows:

Working

:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [67:14455]

...

-A INPUT -i lo -j ACCEPT
-A INPUT -i eth0 -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 11 -j ACCEPT

...

Not working

:INPUT ACCEPT [30:5876]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [25:5616]

...
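To make a boot-time default rule set coexist with kube-proxy, one option is to keep restrictive policies but explicitly accept the traffic that DNAT'd service packets need. A sketch only, assuming eth0 is the external interface, 30000-32767 is the default NodePort range, and 10.1.0.0/16 is the pod network from the kube-proxy config above:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow loopback and replies to established flows
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Accept new connections to the NodePort range on the external interface
-A INPUT -i eth0 -p tcp --dport 30000:32767 -j ACCEPT
# Let DNAT'd service traffic be forwarded into and out of the pod network
-A FORWARD -d 10.1.0.0/16 -m conntrack --ctstate NEW,RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.1.0.0/16 -j ACCEPT
COMMIT
```

The key point is that NodePort packets are rewritten in the nat table before the filter table's FORWARD chain sees them, so a default-DROP FORWARD policy without pod-network accepts will silently eat them.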
