
All pods get restarted while cluster auto upgrade is disabled

Auto upgrade is disabled on the cluster. Yet this morning (Oct. 10th, 2018, around 10:00 am PST), nearly all pods restarted. Could this be related to a new Kubernetes or GKE version release? If not, what could have caused the restarts?

  • When you disable auto upgrade, note that disabling the feature does not halt upgrades that are already in progress.
  • Did you disable auto-upgrade on all node pools in the cluster, and did you check whether any upgrade is currently in progress?
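Assuming you have the Cloud SDK configured for the project, both points can be checked from the command line; `my-cluster` and `us-central1-a` below are placeholders for your actual cluster name and zone:

```shell
# Show whether auto-upgrade is enabled on each node pool of the cluster
gcloud container node-pools list \
    --cluster my-cluster \
    --zone us-central1-a \
    --format="table(name, management.autoUpgrade)"

# Check whether any cluster operation (e.g. an upgrade) is still running
gcloud container operations list --filter="status=RUNNING"
```

If the second command returns an `UPGRADE_NODES` operation, the restarts are likely an upgrade that started before auto upgrade was disabled.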

Without looking at the pod events and other logs, and assuming the nodes restarted as well, one possible explanation is an emergency maintenance operation on the physical infrastructure of the datacenter hosting the nodes. This can be confirmed from the Operations History and the Stackdriver logs.
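As a sketch of how to check this (assuming the Cloud SDK is installed; the zone is a placeholder, and the log filter is one example of narrowing to node-level events):

```shell
# Operations History: recent GKE operations (upgrades, repairs, resizes)
gcloud container operations list \
    --zone us-central1-a \
    --sort-by=~startTime

# Stackdriver: GCE instance activity from the last day, which would include
# host maintenance or migration events that restarted the nodes
gcloud logging read \
    'resource.type="gce_instance" AND logName:"compute.googleapis.com"' \
    --freshness=1d \
    --limit=50
```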

Also check whether the nodes exhausted their resources (e.g. the OOM killer was triggered), which causes pods to restart.
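A quick way to look for OOM kills, assuming `kubectl` access to the cluster, is to list each container's last termination reason and to scan the node descriptions for kernel OOM messages:

```shell
# List containers whose last termination reason was OOMKilled
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{..lastState.terminated.reason}{"\n"}{end}' \
  | grep OOMKilled

# Node-level view: OOM-related events and conditions reported by the kubelet
kubectl describe nodes | grep -i -A 3 "oom"
```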
