
kubectl get nodes unable to connect to the server on AWS EC2 Instance

I have deployed a Kubernetes cluster on AWS EC2 Ubuntu nodes: one master node and one worker node. They are free-tier t2.micro machines with 1 CPU each.

I installed and configured everything, and on day 1 everything worked fine. The kubectl get nodes command responded without any delay, and I was able to create a MySQL deployment.

The next day, when I tried kubectl get nodes, I got The connection to the server :6443 was refused - did you specify the right host or port?, and sometimes even Unable to connect to the server: net/http: TLS handshake timeout.

To verify whether Kubernetes was running, I checked sudo systemctl status kubelet; it showed an active status.

What is even more surprising is that kubectl get nodes sometimes works perfectly and sometimes returns The connection to the server :6443 was refused - did you specify the right host or port?
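For what it's worth, an intermittent "connection refused" on :6443 while kubelet stays active usually points at the kube-apiserver process itself restarting rather than a network problem. A t2.micro has 1 vCPU and 1 GB RAM, which is below kubeadm's documented minimum of 2 CPUs and 2 GB, so the apiserver can be repeatedly OOM-killed. A minimal diagnostic sketch (the cluster-specific commands are shown as comments because they need a live kubeadm master node):

```shell
# Is the machine under memory pressure? On a 1 GB t2.micro the "available"
# column is often close to zero once the control plane is running.
free -m

# Run these on the master node (they require a live kubeadm cluster):
#   sudo systemctl status kubelet                    # should be active (running)
#   sudo journalctl -u kubelet --since "1 hour ago" | tail -n 50
#   sudo crictl ps -a | grep kube-apiserver          # look for repeated restarts
#   dmesg | grep -i "killed process"                 # evidence of the OOM killer
```

If the apiserver container shows a restart loop or the kernel log shows OOM kills, the fix is a bigger instance (or swap, at the cost of performance) rather than any kubectl configuration change.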

How can I fix this issue?

This could be caused by a mismatch between the actual master node IP (where the apiserver listens) and the server entry configured in your ~/.kube/config. Verify that they match, and if not, just update the server entry.
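The check above can be sketched as follows. This is a minimal example, not the only way to do it: it parses the server: line out of a sample kubeadm-style kubeconfig (the file path, IP address, and NEW_IP placeholder are all hypothetical; substitute your real ~/.kube/config and master IP):

```shell
# Sample kubeadm-style kubeconfig fragment (replace with your real ~/.kube/config).
cat > /tmp/sample-kubeconfig <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://172.31.5.10:6443
  name: kubernetes
EOF

# Extract the host and port from the "server:" entry.
SERVER=$(grep -m1 'server:' /tmp/sample-kubeconfig | awk '{print $2}')
HOST=${SERVER#https://}; HOST=${HOST%%:*}
PORT=${SERVER##*:}
echo "kubeconfig points at: $HOST:$PORT"   # -> kubeconfig points at: 172.31.5.10:6443

# On the master node, compare with the instance's current private IP:
#   hostname -I
# If they differ, rewrite the server entry (NEW_IP is a placeholder):
#   sed -i "s#server: https://.*:6443#server: https://NEW_IP:6443#" ~/.kube/config
```

Note that the apiserver's TLS certificate is issued for the IPs known at kubeadm init time, so if the master's IP genuinely changed, updating kubeconfig alone may then surface certificate errors that also need to be addressed.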

Keep in mind that EC2 instances can and will fail (I've seen them crash randomly even on managed clusters). This could explain why your control plane was reachable at one IP one day and no longer reachable the next.
