
Kubelet stopped posting node status (Kubernetes)

I am running a Kubernetes cluster on EKS with two worker nodes. Both nodes are showing NotReady status, and when I checked the kubelet logs on both nodes I found the errors below:

k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Unauthorized
k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Unauthorized

Is there any way I can check which credentials are being used, and how can I fix this error?
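For reference, on the Amazon EKS-optimized AMI the kubelet authenticates to the API server with a token derived from the node's IAM instance role, using the kubeconfig written by the bootstrap script. A minimal way to check which identity that is, assuming the standard AMI layout (paths may differ on custom images):

# On the worker node: see which kubeconfig and token command the kubelet uses
cat /var/lib/kubelet/kubeconfig

# See which IAM identity the node's instance profile actually resolves to
aws sts get-caller-identity

The role ARN returned by sts get-caller-identity is the identity that has to be mapped in the cluster's aws-auth ConfigMap.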

Check the aws-auth ConfigMap to see whether the role used by the worker nodes has the proper permissions. You can also enable the EKS control plane logs in CloudWatch and check the authenticator logs to see which role is being denied access.
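For example, run these with an identity that still has cluster access (typically the cluster creator); the cluster name below is a placeholder:

# Inspect the current identity mappings
kubectl -n kube-system get configmap aws-auth -o yaml

# Enable authenticator logging on the control plane so denied roles show up in CloudWatch
aws eks update-cluster-config --name my-cluster \
  --logging '{"clusterLogging":[{"types":["authenticator"],"enabled":true}]}'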

You can reset the ConfigMap at any time with the same IAM user/role that was used to create the cluster, even if that identity is not present in the ConfigMap (the cluster creator is always granted system:masters access outside of aws-auth).
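A minimal aws-auth ConfigMap that restores the worker node mapping looks like this (the account ID and role name are placeholders for your node instance role); apply it with kubectl apply -f while authenticated as the cluster-creator identity:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/EKS-NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes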

It is important that you do not delete this role/user from IAM.
