
EKS: Unhealthy nodes in the kubernetes cluster

I'm getting an error when using Terraform to provision a node group on AWS EKS: "Error: error waiting for EKS Node Group (xxx) creation: NodeCreationFailure: Unhealthy nodes in the kubernetes cluster".
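The Terraform error only surfaces the top-level failure reason; the node group's health issues can usually be read back from EKS directly. A minimal sketch with the AWS CLI, using placeholder cluster and node group names:

aws eks describe-nodegroup --cluster-name my-cluster --nodegroup-name my-nodegroup

The health.issues field of the response normally carries the same NodeCreationFailure code together with the affected instance IDs.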

I went to the console and inspected the node. There is a message: “runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker network plugin is not ready: cni config uninitialized”.
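That message comes from the node's Ready condition, so it can also be confirmed with kubectl; a rough sketch (the node name below is a placeholder):

kubectl get nodes
kubectl describe node ip-10-0-1-23.ec2.internal

The Conditions and Events sections of the describe output should show the same NetworkPluginNotReady reason. If SSH access to the instance is available, the kubelet logs and the CNI config directory usually show whether the CNI configuration was ever written:

sudo journalctl -u kubelet --no-pager | tail -n 50
ls /etc/cni/net.d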

I have 5 private subnets, and they connect to the Internet via NAT.

Can someone give me some hints on how to debug this?

Here are some details on my env.

Kubernetes version: 1.18
Platform version: eks.3
AMI type: AL2_x86_64
AMI release version: 1.18.9-20201211
Instance types: m5.xlarge

There are three workloads set up in the cluster.

coredns: 2 Desired, 0 Available, 0 Ready
aws-node: 5 Desired, 5 Scheduled, 0 Available, 0 Ready
kube-proxy: 5 Desired, 5 Scheduled, 5 Available, 5 Ready

Going into coredns, both pods are in Pending state, and the conditions show “Available=False, Deployment does not have minimum availability” and “Progressing=False, ReplicaSet xxx has timed out progressing”. Going into one of the pods in aws-node, the status shows “Waiting - CrashLoopBackOff”.
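Since aws-node is the pod that is actually crashing, its logs and events are the most useful place to look next; a rough sketch of the usual inspection commands (the pod name is a placeholder):

kubectl -n kube-system get pods -o wide
kubectl -n kube-system describe pod aws-node-xxxxx
kubectl -n kube-system logs aws-node-xxxxx --previous
kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp

The describe output shows why the pod keeps restarting, and --previous fetches the log of the last crashed container. The Pending coredns pods are expected to stay Pending until pod networking is healthy.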

Add a pod network add-on:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
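Whichever CNI is applied, a quick way to confirm networking has come up afterwards is to watch the kube-system pods and the node conditions (standard kubectl, no extra assumptions):

kubectl -n kube-system get pods -w
kubectl get nodes -w

Once the network plugin is initialized, the coredns pods should be scheduled and become Ready, and the nodes should move from NotReady to Ready.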
