
Init a Kubernetes cluster with kubeadm using a public IP on AWS

I am trying to follow this tutorial: https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-ubuntu-18-04

Important difference: I need to run the master on a specific node, and the worker nodes are in different AWS regions.

So it all went well until I wanted to join the worker nodes (step 5). The join command succeeded, but kubectl get nodes still only showed the master node.

I looked at the join command and it contained the master's private IP address: join 10.1.1.40. I guess that cannot work if the workers are in a different region (note: later we will probably even need to add nodes from different providers, so unless there is an important security concern, it should work via public IPs).
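
For reference, the full form of the printed join command looks roughly like the sketch below; the token and CA hash are placeholders, not values from my actual output:

    kubeadm join 10.1.1.40:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>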

So while kubeadm init --pod-network-cidr=10.244.0.0/16 initialized the cluster, it did so with this internal IP. I then tried:

    kubeadm init --apiserver-advertise-address <Public-IP-Addr> --apiserver-bind-port 16443 --pod-network-cidr=10.244.0.0/16

But then it always hangs, and init does not complete. The kubelet log prints lots of

E0610 19:24:24.188347 1051920 kubelet.go:2267] node "ip-xxxx" not found

where "ip-xxxx" seems to be the master's node hostname on AWS.

I think what made it work was setting the master's hostname to its public DNS name and then passing that as the --control-plane-endpoint argument, without --apiserver-advertise-address (but still with --apiserver-bind-port, as I need to run the API server on another port).
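
Concretely, the sequence that appears to work is roughly the following sketch, where <public-dns> stands for the master's public DNS name (e.g. an ec2-*.compute.amazonaws.com name):

    sudo hostnamectl set-hostname <public-dns>
    sudo kubeadm init \
        --control-plane-endpoint <public-dns>:16443 \
        --apiserver-bind-port 16443 \
        --pod-network-cidr=10.244.0.0/16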

I need to have it run longer to confirm, but so far it looks good.
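
For completeness, a worker in another region then joins against the public endpoint, and the master should list it; a sketch with placeholder credentials:

    # on a worker node in another region:
    sudo kubeadm join <public-dns>:16443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

    # back on the master:
    kubectl get nodes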
