
Kubelet Master stays in KubeletNotReady because of cni missing

Kubelet has been initialized with the pod network for Calico:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --image-repository=someserver

Then I got calico.yaml v3.11 and applied it:

sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" apply -f calico.yaml

After that I checked the node status:

sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" get nodes
NAME              STATUS     ROLES    AGE     VERSION
master-1   NotReady   master   7m21s   v1.17.2

In the node describe output I can see that the cni config is uninitialized, but I thought Calico was supposed to take care of that?

  MemoryPressure   False   Fri, 21 Feb 2020 10:14:24 +0100   Fri, 21 Feb 2020 10:09:00 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 21 Feb 2020 10:14:24 +0100   Fri, 21 Feb 2020 10:09:00 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 21 Feb 2020 10:14:24 +0100   Fri, 21 Feb 2020 10:09:00 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 21 Feb 2020 10:14:24 +0100   Fri, 21 Feb 2020 10:09:00 +0100   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
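
For reference, the conditions above come from describing the node with the same kubeconfig, something like:

sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" describe node master-1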

Indeed, I have nothing under /etc/cni/net.d/, so it seems something was skipped?

ll /etc/cni/net.d/
total 0
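
For comparison, on a node where the calico-node pod has come up successfully, its install-cni init container normally populates this directory; roughly (the exact file names can vary by Calico version):

ls /etc/cni/net.d/
10-calico.conflist  calico-kubeconfig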
sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" -n kube-system get pods
NAME                                       READY   STATUS                  RESTARTS   AGE
calico-kube-controllers-5644fb7cf6-f7lqq   0/1     Pending                 0          3h
calico-node-f4xzh                          0/1     Init:ImagePullBackOff   0          3h
coredns-7fb8cdf968-bbqbz                   0/1     Pending                 0          3h24m
coredns-7fb8cdf968-vdnzx                   0/1     Pending                 0          3h24m
etcd-master-1                              1/1     Running                 0          3h24m
kube-apiserver-master-1                    1/1     Running                 0          3h24m
kube-controller-manager-master-1           1/1     Running                 0          3h24m
kube-proxy-9m879                           1/1     Running                 0          3h24m
kube-scheduler-master-1                    1/1     Running                 0          3h24m

As explained, I am going through a local repository, and journalctl says:

 kubelet[21935]: E0225 14:30:54.830683   21935 pod_workers.go:191] Error syncing pod cec2f72b-844a-4d6b-8606-3aff06d4a36d ("calico-node-f4xzh_kube-system(cec2f72b-844a-4d6b-8606-3aff06d4a36d)"), skipping: failed to "StartContainer" for "upgrade-ipam" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://repo:10000/v2/calico/cni/manifests/v3.11.2: no basic auth credentials"
 kubelet[21935]: E0225 14:30:56.008989   21935 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
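
The "no basic auth credentials" part suggests the docker daemon on the node cannot authenticate against repo:10000; the same pull can be reproduced by hand with the image reference taken from the log above:

sudo docker pull repo:10000/calico/cni:v3.11.2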

It feels like it's more than just a CNI problem.

The CoreDNS pods will stay in Pending state and the master will remain NotReady until the calico pods are running successfully and the CNI is set up correctly.

Downloading the calico docker images from docker.io seems to be running into a network issue. So you can pull the calico images from docker.io, push them to your internal container registry, modify the calico yaml so that its image fields reference that registry, and finally apply the modified calico yaml to the kubernetes cluster.
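
A rough sketch of that workflow, assuming the internal registry is reachable as my-registry.example.com (this host name is a placeholder, and the sed pattern assumes the v3.11 manifest references images as calico/<name>:v3.11.2; adjust both to your environment):

# Pull the Calico v3.11.2 images from docker.io, re-tag them for the
# internal registry and push them there
for img in cni node kube-controllers pod2daemon-flexvol; do
  docker pull calico/$img:v3.11.2
  docker tag calico/$img:v3.11.2 my-registry.example.com/calico/$img:v3.11.2
  docker push my-registry.example.com/calico/$img:v3.11.2
done

# Point the manifest at the internal registry and re-apply it
sed -i 's#image: calico/#image: my-registry.example.com/calico/#' calico.yaml
sudo kubectl --kubeconfig="/etc/kubernetes/admin.conf" apply -f calico.yaml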

So the problem behind Init:ImagePullBackOff was that it could not automatically pull the images from my private repo. I had to pull all the calico images from docker myself. Then I deleted the calico pod so that it recreated itself with the newly pushed images:

sudo docker pull private-repo/calico/pod2daemon-flexvol:v3.11.2
sudo docker pull private-repo/calico/node:v3.11.2
sudo docker pull private-repo/calico/cni:v3.11.2
sudo docker pull private-repo/calico/kube-controllers:v3.11.2

sudo kubectl -n kube-system delete po/calico-node-y7g5

After that the node went through all the init phases again and:

sudo kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5644fb7cf6-qkf47   1/1     Running   0          11s
calico-node-mkcsr                          1/1     Running   0          21m
coredns-7fb8cdf968-bgqvj                   1/1     Running   0          37m
coredns-7fb8cdf968-v85jx                   1/1     Running   0          37m
etcd-lin-1k8w1dv-vmh                       1/1     Running   0          38m
kube-apiserver-lin-1k8w1dv-vmh             1/1     Running   0          38m
kube-controller-manager-lin-1k8w1dv-vmh    1/1     Running   0          38m
kube-proxy-9hkns                           1/1     Running   0          37m
kube-scheduler-lin-1k8w1dv-vmh             1/1     Running   0          38m
