
Kubernetes pods are stuck after scale-up on AWS: Multi-Attach error for volume

I am experiencing some issues when scaling the EC2 nodes of my k8s cluster down and up. Sometimes new nodes come up and old ones are terminated. The k8s version is 1.22.

Sometimes some pods get stuck in the ContainerCreating state. When I describe such a pod, I see events like this:

Warning FailedAttachVolume 29m attachdetach-controller Multi-Attach error for volume
Warning FailedMount 33s (x13 over 27m) kubelet....
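
For reference, I get those events with commands roughly like these (the pod name and namespace are just examples):

# find pods stuck in ContainerCreating (namespace is an example)
kubectl get pods -n my-namespace | grep ContainerCreating

# show the events for one of them
kubectl describe pod my-app-0 -n my-namespace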

I checked that the PV exists and the PVC exists as well. However, on the PVC I see the annotation volume.kubernetes.io/selected-node, and its value refers to a node that no longer exists.
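
In case it helps, this is roughly how I check the PV, the PVC and the annotation (the claim name and namespace are just examples):

# list PVs and the PVC used by the stuck pod
kubectl get pv
kubectl get pvc data-my-app-0 -n my-namespace

# look for the selected-node annotation on the PVC
kubectl get pvc data-my-app-0 -n my-namespace -o yaml | grep selected-node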

When I edit the PVC and delete this annotation, everything starts working again. Another thing: it does not happen every time, and I don't understand why.

I searched for information and found a couple of links:

https://github.com/kubernetes/kubernetes/issues/100485 and https://github.com/kubernetes/kubernetes/issues/89953, but I am not sure that I properly understand them.

Could you please help me out with this?

Well, as you found out in "volume.kubernetes.io/selected-node never cleared for non-existent nodes on PVC without PVs" (#100485), this is a known issue, with no available fix yet.

Until the issue is fixed, as a workaround, you need to remove the volume.kubernetes.io/selected-node annotation manually.
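
For example, something along these lines should do it (replace the PVC name and namespace with your own):

# option 1: drop the annotation in place (the trailing "-" removes it)
kubectl annotate pvc data-my-app-0 -n my-namespace volume.kubernetes.io/selected-node-

# option 2: open the PVC in an editor and delete the annotation line by hand
kubectl edit pvc data-my-app-0 -n my-namespace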
