
containers with ipv6 addresses can't connect to outside in k8s/calico environment

I am trying to test IPv6 connectivity in a k8s environment and installed the Calico network plugin. The issue is that containers can't ping the IPv6 gateway or the other addresses of the cluster nodes. The versions are k8s v1.18.2 and Calico v3.12 (also tried v3.13). The configuration is as follows:

CentOS 7, kernel 4.4 (upgraded)
Enabled IPv6 forwarding:
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.forwarding = 1

calico config:

[root@k8s-master-01 ~]# calicoctl get ipp -owide
NAME                  CIDR            NAT    IPIPMODE   VXLANMODE   DISABLED   SELECTOR   
default-ipv4-ippool   10.244.0.0/16   true   Never      Never       false      all()      
default-ipv6-ippool   fc00:f00::/24   true   Never      Never       false      all()      
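
For reference, the IPv6 pool above corresponds roughly to an IPPool manifest like the one below. This is only a sketch (fields per the projectcalico.org/v3 API); calicoctl get ippool default-ipv6-ippool -o yaml would show the exact resource:

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv6-ippool
spec:
  cidr: fc00:f00::/24
  natOutgoing: true     # the NAT column in the listing above
  ipipMode: Never
  vxlanMode: Never
  disabled: false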

Within the pod, I can see an IPv6 address is allocated:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet 10.244.36.196  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::a8c6:c1ff:fe61:258c  prefixlen 64  scopeid 0x20<link>
        inet6 fc00:fd8:4bce:9a48:4ab7:a333:5ec8:c684  prefixlen 128  scopeid 0x0<global>
        ether aa:c6:c1:61:25:8c  txqueuelen 0  (Ethernet)
        RX packets 23026  bytes 3522721 (3.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24249  bytes 3598501 (3.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@k8s-worker-01 ~]# ip -6 route show
fc00:fd8:4bce:9a48:4ab7:a333:5ec8:c684 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::ecee:eeff:feee:eeee dev eth0 metric 1024 pref medium

Actually, I captured packets with tcpdump on the host and can see some ICMPv6 requests coming in on the cali66e9f9aafee interface, but there seems to be no further processing. I checked ip6tables and saw that no packets reached the masquerade chain:

[root@k8s-worker-01 ~]# ip6tables -t nat -vnL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    1    80 cali-PREROUTING  all      *      *       ::/0                 ::/0                 /* cali:6gwbT8clXdHdC1b1 */

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 791 packets, 63280 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  796 63680 cali-OUTPUT  all      *      *       ::/0                 ::/0                 /* cali:tVnHkvAo15HuiPy0 */

Chain POSTROUTING (policy ACCEPT 791 packets, 63280 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  796 63680 cali-POSTROUTING  all      *      *       ::/0                 ::/0                 /* cali:O3lYWMrLQYEMJtB5 */

Chain cali-OUTPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  796 63680 cali-fip-dnat  all      *      *       ::/0                 ::/0                 /* cali:GBTAv2p5CwevEyJm */

Chain cali-POSTROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  796 63680 cali-fip-snat  all      *      *       ::/0                 ::/0                 /* cali:Z-c7XtVd2Bq7s_hA */
  796 63680 cali-nat-outgoing  all      *      *       ::/0                 ::/0                 /* cali:nYKhEzDlr11Jccal */

Chain cali-PREROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    1    80 cali-fip-dnat  all      *      *       ::/0                 ::/0                 /* cali:r6XmIziWUJsdOK6Z */

Chain cali-fip-dnat (2 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain cali-fip-snat (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain cali-nat-outgoing (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MASQUERADE  all      *      *       ::/0                 ::/0                 /* cali:Ir_z2t1P6-CxTDof */ match-set cali60masq-ipam-pools src ! match-set cali60all-ipam-pools dst

I have tried many times but failed. Did I miss something?

Regards

Enabling IPv6 on your cluster isn't as simple as what you did; just configuring IPv6 in your network isn't going to work with Kubernetes.

The first and most important point in this matter is that IPv4/IPv6 dual-stack is an alpha feature. As with any alpha feature, it may not work as expected.

Please go through this document to better understand the prerequisites for making it work in your cluster (you have to add a feature gate).
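
For k8s v1.18 the relevant feature gate is IPv6DualStack. A rough, hedged sketch of where it could be set in kubeadm-style config files (field names per the kubeadm v1beta2 and kubelet v1beta1 config APIs; adjust for your own deployment):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: "IPv6DualStack=true"
controllerManager:
  extraArgs:
    feature-gates: "IPv6DualStack=true"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  IPv6DualStack: true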

There is also some work being done to make it possible to bootstrap a Kubernetes cluster with dual stack using kubeadm, but it's not usable yet and there is no ETA for it.

There are some examples of IPv6 and dual-stack setups with other networking plugins in this repository.

This project serves two primary purposes: (i) study and validate ipv6 support in kubernetes and associated plugins; (ii) provide a dev environment for implementing and testing additional functionality (e.g. dual-stack).

I had exactly the same issue with a similar CentOS 7 setup.

Besides following the instructions on the Calico website and making sure that all nodes had IPv6 forwarding enabled, the solution was setting the environment variable CALICO_IPV6POOL_NAT_OUTGOING to true for install-cni in the initContainers section and for calico-node in the containers section.
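
Concretely, that means an env entry in both places of the stock calico.yaml. A minimal sketch of just the relevant fragment of the calico-node DaemonSet (everything else stays as shipped):

      initContainers:
        - name: install-cni
          env:
            - name: CALICO_IPV6POOL_NAT_OUTGOING
              value: "true"
      containers:
        - name: calico-node
          env:
            - name: CALICO_IPV6POOL_NAT_OUTGOING
              value: "true"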

In my case I also had to set IP_AUTODETECTION_METHOD to my actual interface with the public v6 IP address.
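
That is another env var on the calico-node container. A sketch is below; interface=eth0 is only a placeholder for whatever NIC actually carries the public v6 address, and IP6_AUTODETECTION_METHOD is Calico's IPv6 counterpart (not something I had to touch, but it may be the relevant one for v6):

            - name: IP_AUTODETECTION_METHOD
              value: "interface=eth0"
            - name: IP6_AUTODETECTION_METHOD
              value: "interface=eth0"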

I also explicitly added --proxy-mode=iptables to the kube-proxy parameters (I'm not sure whether that is the default).
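
In a kubeadm-deployed cluster that flag is usually expressed through the kube-proxy ConfigMap rather than a command-line argument; a hedged equivalent:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "iptables"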

I hope this helps.

Thanks for your comments. I found that the root cause is that Calico automatically deletes the route to the container about 15 seconds after the route was created, like below:

[2020-06-20T22:12:21.292676] ff00::/8 dev caliad9673f27e9 table local metric 256 pref medium
[2020-06-20T22:12:21.292723] fe80::/64 dev caliad9673f27e9 proto kernel metric 256 pref medium
[2020-06-20T22:12:21.292736] 10.244.36.212 dev caliad9673f27e9 scope link
[2020-06-20T22:12:21.292746] fc00:f00:0:24fe:200:8fa7:f4c7:af14 dev caliad9673f27e9 metric 1024 pref medium
[2020-06-20T22:12:23.173297] local fe80::ecee:eeff:feee:eeee dev lo table local proto unspec metric 0 pref medium
[2020-06-20T22:12:23.173376] local fe80:: dev lo table local proto unspec metric 0 pref medium
[2020-06-20T22:12:31.734619] Deleted fc00:f00:0:24fe:200:8fa7:f4c7:af14 dev caliad9673f27e9 metric 1024 pref medium

There's also an issue report on GitHub: they found it in Calico 3.9, and I tried this with 3.13.4 with the same result: https://github.com/projectcalico/calico/issues/2876
