I'm having an issue with what I believe to be the k8s autoscaler.
The autoscaler launched a new instance after a recent deploy (I can see the instance in EC2, where our k8s cluster is hosted), but it doesn't show up in kubectl get nodes:
kubectl get nodes
NAME                             STATUS    ROLES     AGE       VERSION
ip-172-20-110-212.ec2.internal   Ready     master    322d      v1.5.1
ip-172-20-129-59.ec2.internal    Ready     master    322d      v1.5.1
ip-172-20-153-170.ec2.internal   Ready     &lt;none&gt;    322d      v1.5.1
ip-172-20-160-119.ec2.internal   Ready     master    322d      v1.5.1
ip-172-20-162-94.ec2.internal    Ready     &lt;none&gt;    316d      v1.5.1
ip-172-20-166-194.ec2.internal   Ready     &lt;none&gt;    322d      v1.5.1
ip-172-20-79-1.ec2.internal      Ready     &lt;none&gt;    112d      v1.5.1
ip-172-20-92-163.ec2.internal    Ready     &lt;none&gt;    322d      v1.5.1
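In hindsight, a useful first diagnostic here would have been to check whether the kubelet on the new instance ever managed to register with the API server. A rough sketch, assuming a systemd-managed kubelet (typical for kops-style EC2 clusters; adjust unit names to your setup):

# On the EC2 instance that never appeared as a node:
systemctl status kubelet
journalctl -u kubelet --since "10 minutes ago" | tail -n 50

# From a machine with cluster access, check whether the API server
# recorded any events mentioning the instance's IP:
kubectl get events --all-namespaces | grep 172-20-181-122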
Furthermore, a kube-proxy pod matching the “missing” node's IP does show up, but it is killed and relaunched roughly every 30 seconds:
kubectl get pods
NAME                                        READY     STATUS    RESTARTS   AGE
kube-proxy-ip-172-20-181-122.ec2.internal   1/1       Running   0          17s
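To see why it keeps getting killed, the standard approach would be to pull the previous container's logs and the pod's event list, e.g. (pod name taken from the output above; add -n kube-system if kube-proxy runs in that namespace in your cluster):

kubectl logs --previous kube-proxy-ip-172-20-181-122.ec2.internal
kubectl describe pod kube-proxy-ip-172-20-181-122.ec2.internal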
I ended up manually deleting the EC2 instance. The autoscaler immediately launched a replacement, and everything worked fine afterward.
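For anyone hitting the same thing, two quick checks before resorting to deletion: watch the node list while the replacement boots to confirm it registers this time, and read the autoscaler's own logs for why the first instance got stuck. The pod name below is a placeholder; find the real one with kubectl -n kube-system get pods:

# Watch the replacement node join:
kubectl get nodes -w

# Inspect the autoscaler's logs (placeholder pod name):
kubectl -n kube-system logs &lt;cluster-autoscaler-pod&gt;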