
hyperkube proxy, kubelet can't find iptables chain, rkt run --net=host

My kubelet complains:

E1201 09:00:12.562610 28747 kubelet_network.go:365] Failed to ensure rule to drop packet marked by KUBE-MARK-DROP in filter chain KUBE-FIREWALL: error appending rule: exit status 1: iptables: No chain/target/match by that name.

This usually happens when you forget to pass --net=host to 'rkt run', but I have not:

export RKT_OPTS="--volume var-log,kind=host,source=/var/log \
  --mount volume=var-log,target=/var/log \
  --volume dns,kind=host,source=/etc/resolv.conf \
  --mount volume=dns,target=/etc/resolv.conf \
  --net=host"

The following confirms that my kube-proxy (started by kubelet) is in the same network namespace as the host that owns the iptables chains:

root@i8:/etc# d exec -it 738 readlink /proc/self/ns/net
net:[4026531963]

root@i8:/etc# readlink /proc/self/ns/net
net:[4026531963]

root@i8:/etc# docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS                           NAMES
738ed14ec802        quay.io/coreos/hyperkube:v1.4.6_coreos.0   "/hyperkube proxy --m"   44 minutes ago      Up 44 minutes                                       k8s_kube-proxy.c445d412_kube-proxy-192.168.101.128_kube-system_438e3d01f328e73a199c6c0ed1f92053_10197c34

The proxy similarly complains "No chain/target/match by that name".

I have also verified the iptables chain:

# Completed on Thu Dec  1 01:07:11 2016
# Generated by iptables-save v1.4.21 on Thu Dec  1 01:07:11 2016
*filter
:INPUT ACCEPT [4852:1419411]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [5612:5965118]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -j DROP
COMMIT

This satisfies the complaint in the error message (I think), and it matches the filter chain on a problem-free CoreOS worker (a different machine I compared against).
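
One way to isolate whether iptables itself is the problem is to append, by hand, a rule similar to the one kubelet tries to install (the rule below is my approximation of kubelet's KUBE-FIREWALL rule, not copied from its source). If a required match module such as xt_comment or xt_mark is missing from the kernel, the command fails with the same "No chain/target/match by that name" error; if it succeeds, the kernel is probably fine and the problem is elsewhere:

# Approximation of the rule kubelet appends to KUBE-FIREWALL; it exercises
# both the xt_comment and xt_mark modules. Remove it afterwards with -D and
# the same arguments.
iptables -A KUBE-FIREWALL \
  -m comment --comment "kubernetes firewall for dropping marked packets" \
  -m mark --mark 0x8000/0x8000 \
  -j DROP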

The problem worker is Debian Jessie, running docker 1.12.3 and rkt 1.18.0.

Both the good worker and the problem worker are running the same version of iptables, 1.4.21.

KUBELET_VERSION=v1.4.6_coreos.0

The symptom is that Kubernetes on the problem worker does not install any iptables rules, such as KUBE-NODEPORTS, so this worker cannot listen for NodePort services. I think it's because of the above.
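
A quick way to confirm this symptom (KUBE-NODEPORTS is the standard kube-proxy chain name, nothing specific to my setup) is to look for the chain in the nat table:

# On a healthy worker kube-proxy creates KUBE-NODEPORTS in the nat table;
# on the problem worker both commands come back empty / with an error.
iptables -t nat -nL KUBE-NODEPORTS
iptables-save -t nat | grep KUBE-NODEPORTS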

The problem worker has no problem running pods that the master node schedules.

Pods on the problem worker are serving requests fine via a proxy running on a different (CoreOS) worker.

I'm using flannel for networking.

In case anyone was wondering, I need to get Kubernetes working on Debian (yeah, it's a long story).

What else can I do to isolate what seems to be kubelet not seeing the host's iptables?

After much fault isolation, I've found the cause and solution.

In my case, I'm running a custom kernel package (linux-image) that was missing several kernel modules related to iptables. So when kubelet tried to append an iptables rule containing a comment, it errored because xt_comment wasn't loaded.
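
A minimal check for this, assuming the modules were at least built for your kernel, is:

# See whether the comment/mark match modules are currently loaded
lsmod | grep -E 'xt_comment|xt_mark'
# Try to load them; modprobe fails if the module was never built for this kernel
modprobe xt_comment
modprobe xt_mark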

These are the modules I was missing: ipt_REJECT, nf_conntrack_netlink, nf_reject_ipv4, sch_fq_codel (maybe not required), xt_comment, xt_mark, xt_recent, xt_statistic.

To get a complete list of the modules I likely needed, I logged into a CoreOS Kubernetes worker and looked at its lsmod output, then compared that list to the one on my "problem" machine.
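
For reference, a sketch of that comparison (file names are arbitrary):

# On the healthy CoreOS worker
lsmod | awk 'NR>1 {print $1}' | sort > good_modules.txt
# On the problem worker
lsmod | awk 'NR>1 {print $1}' | sort > bad_modules.txt
# Modules loaded on the good worker but absent on the problem worker
comm -23 good_modules.txt bad_modules.txt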

I had this issue on a Gentoo box with a custom kernel configuration while running k8s with Rancher's k3d 1.3.1. Rebuilding the kernel with all the sane iptables options plus xt_comment solved it for me.
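
For anyone rebuilding a custom kernel, these are the .config options that I believe correspond to the modules listed above (the names are my best mapping to mainline Kconfig, so double-check them against your kernel version):

CONFIG_IP_NF_TARGET_REJECT=m          # ipt_REJECT
CONFIG_NF_REJECT_IPV4=m               # nf_reject_ipv4
CONFIG_NF_CT_NETLINK=m                # nf_conntrack_netlink
CONFIG_NETFILTER_XT_MATCH_COMMENT=m   # xt_comment
CONFIG_NETFILTER_XT_MARK=m            # xt_mark
CONFIG_NETFILTER_XT_MATCH_RECENT=m    # xt_recent
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m # xt_statistic
CONFIG_NET_SCH_FQ_CODEL=m             # sch_fq_codel (maybe not required)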
