
"nslookup: read: Connection refused" from inside of a pod in Kubernetes (K8S) cluster (DNS problem)

Problem

I have a custom installation of a k8s cluster with 1 master and 1 node on AWS EC2, based on CentOS 7. It uses CoreDNS (the pods are running fine, with no errors in the logs). Inside a pod on the node, calling e.g. nslookup google.com gives:

nslookup: write to '10.96.0.10': Connection refused
;; connection timed out; no servers could be reached

For example, ping 8.8.8.8 from inside a pod works fine:

PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=50 time=1.330 ms

/etc/resolv.conf inside a pod looks like:

nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
options ndots:5
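As an aside, the ndots:5 setting means that any name with fewer than five dots (including google.com) is first tried with each search domain appended before being sent as-is, so every external lookup hits the cluster DNS at 10.96.0.10 several times. The sketch below is a simplified, hypothetical illustration of that expansion logic (expand_query is not a real library function):

```python
# Sketch of how a glibc-style resolver expands a name using the
# "search" and "ndots" settings from /etc/resolv.conf shown above.
# expand_query is a hypothetical helper for illustration only.

def expand_query(name, search_domains, ndots=5):
    """Return the list of FQDNs tried, in order."""
    candidates = []
    if name.count(".") < ndots:
        # Fewer dots than ndots: try each search domain first,
        # then the name as given.
        candidates += [f"{name}.{d}" for d in search_domains]
        candidates.append(name)
    else:
        # Enough dots: try the name as given first.
        candidates.append(name)
        candidates += [f"{name}.{d}" for d in search_domains]
    return candidates

search = ["default.svc.cluster.local", "svc.cluster.local",
          "cluster.local", "ec2.internal"]
for fqdn in expand_query("google.com", search):
    print(fqdn)
```

With the search list above, google.com is only tried verbatim on the fifth attempt, which is why a broken cluster DNS makes even external lookups fail slowly.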

Running nslookup google.com from the node itself works fine:

Server:         172.31.0.2
Address:        172.31.0.2#53

Non-authoritative answer:
Name:   google.com
Address: 172.217.15.110
Name:   google.com
Address: 2607:f8b0:4004:801::200e

Kubelet config kubectl get configmap kubelet-config-1.17 -n kube-system -o yaml returns:

data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 0s
        cacheUnauthorizedTTL: 0s
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local
    cpuManagerReconcilePeriod: 0s
    evictionPressureTransitionPeriod: 0s
    fileCheckFrequency: 0s
    healthzBindAddress: 127.0.0.1
    healthzPort: 10248
    httpCheckFrequency: 0s
    imageMinimumGCAge: 0s
    kind: KubeletConfiguration
    nodeStatusReportFrequency: 0s
    nodeStatusUpdateFrequency: 0s
    rotateCertificates: true
    runtimeRequestTimeout: 0s
    staticPodPath: /etc/kubernetes/manifests
    streamingConnectionIdleTimeout: 0s
    syncFrequency: 0s
    volumeStatsAggPeriod: 0s
kind: ConfigMap

Pods in the kube-system namespace (kubectl get pods -n kube-system) look like this:

coredns-6955765f44-qdbgx                                1/1     Running   6          11d
coredns-6955765f44-r4v7n                                1/1     Running   6          11d
etcd-ip-172-31-42-121.ec2.internal                      1/1     Running   7          11d
kube-apiserver-ip-172-31-42-121.ec2.internal            1/1     Running   7          11d
kube-controller-manager-ip-172-31-42-121.ec2.internal   1/1     Running   6          11d
kube-proxy-lrpd9                                        1/1     Running   6          11d
kube-proxy-z55cv                                        1/1     Running   6          11d
kube-scheduler-ip-172-31-42-121.ec2.internal            1/1     Running   6          11d
weave-net-bdn5n                                         2/2     Running   0          39h
weave-net-z7mks                                         2/2     Running   5          39h

UPDATE

From the pod, ip route returns:

default via 10.32.0.1 dev eth0 
10.32.0.0/12 dev eth0 scope link  src 10.32.0.16 

From master:

default via 172.31.32.1 dev eth0 
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.31.32.0/20 dev eth0 proto kernel scope link src 172.31.42.121 

From node:

default via 172.31.32.1 dev eth0 
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.31.32.0/20 dev eth0 proto kernel scope link src 172.31.46.62 
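One way to narrow down a symptom like this is to distinguish "connection refused" (the host answered with a reset or port-unreachable, so routing works but nothing is listening on port 53) from a plain timeout (packets are silently dropped by a bad route or firewall rule). A minimal probe, run from the node or from inside a pod, might look like the sketch below; the target IP and port are assumptions matching this cluster's DNS service:

```python
# Minimal TCP probe to tell "refused" (host reachable, nothing
# listening on the port) apart from "timeout" (packets dropped).
import socket

def probe_tcp(host, port, timeout=2.0):
    """Return 'open', 'refused', 'timeout', or an error string."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "refused"
    except socket.timeout:
        return "timeout"
    except OSError as e:
        return f"error: {e}"
    finally:
        s.close()

# 10.96.0.10:53 is the cluster DNS ClusterIP from /etc/resolv.conf above.
print(probe_tcp("10.96.0.10", 53))
```

A "refused" result from inside the pod would point at kube-proxy not programming the ClusterIP (or CoreDNS not listening), while "timeout" would point at the overlay network or routing, which is consistent with the misconfigured private IPs found below.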

CoreDNS configmap kubectl -n kube-system get configmap coredns -oyaml is:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap

So why doesn't nslookup google.com work inside of a pod?

Answer

The installation of the k8s cluster was wrong: the ansible inventory should contain the correct private IPs of the master and node EC2 VMs.

dev-kubernetes-master ansible_host=34.233.207.xxx private_ip=172.31.37.xx
dev-kubernetes-slave ansible_host=52.6.10.xxx private_ip=172.31.42.xxx

I reinstalled the cluster with the correct private IPs specified (before, no private IP was set at all) and the problem went away.
