My kubernetes cluster IP address changed and now kubectl will no longer connect
I used kubeadm init to set up my cluster (master node) and copied /etc/kubernetes/admin.conf to $HOME/.kube/config, and all was well when using kubectl. The machine's IP address has since changed and no longer matches the one recorded in $HOME/.kube/config, so now I can no longer connect with kubectl.
So how do I regenerate the admin.conf now that I have a new IP address? Running kubeadm init again would just kill everything, which is not what I want.
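For context, the stale address can be confirmed by looking at the server: field the client dials. A minimal sketch, using a hypothetical throwaway kubeconfig rather than a live one:

```shell
# Write a hypothetical, minimal kubeconfig for illustration;
# on a real master you would inspect $HOME/.kube/config instead.
cat > /tmp/sample-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://192.168.1.10:6443
  name: kubernetes
EOF

# The address kubectl will dial; after an IP change this line
# still shows the old one.
grep 'server:' /tmp/sample-kubeconfig
```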
The following command can be used to regenerate admin.conf:
kubeadm alpha phase kubeconfig admin --apiserver-advertise-address <new_ip>
However, if you use an IP instead of a hostname, your API server certificate will be invalid. So either regenerate your certs (kubeadm alpha phase certs renew apiserver), use hostnames instead of IPs, or add the insecure --insecure-skip-tls-verify flag when using kubectl.
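As a stopgap sketch of the last option, one can simply rewrite the address inside a copy of the kubeconfig and rely on --insecure-skip-tls-verify until the certs are renewed. All IPs and paths below are placeholders, not values from the question:

```shell
OLD_IP=192.168.1.10   # placeholder: address baked into the old kubeconfig
NEW_IP=192.168.1.20   # placeholder: the node's new address

# Hypothetical minimal kubeconfig standing in for /etc/kubernetes/admin.conf
cat > /tmp/admin.conf <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://${OLD_IP}:6443
  name: kubernetes
EOF

# Point the client at the new address; the API server certificate is
# still only valid for the old IP, hence --insecure-skip-tls-verify
# would be needed until the certs are regenerated.
sed -i "s/${OLD_IP}/${NEW_IP}/" /tmp/admin.conf
grep 'server:' /tmp/admin.conf
```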
I found this solution on the internet and it works for me:
# Stop services so the cluster state can be rebuilt
systemctl stop kubelet docker

# Back up the existing configuration and kubelet state
cd /etc/
mv kubernetes kubernetes-backup
mv /var/lib/kubelet /var/lib/kubelet-backup

# Keep the CA and service-account keys, but drop the certificates
# that embed the old IP (apiserver and etcd peer certs)
mkdir -p kubernetes
cp -r kubernetes-backup/pki kubernetes
rm kubernetes/pki/{apiserver.*,etcd/peer.*}

systemctl start docker

# Re-init, reusing the existing etcd data directory
kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd

# Run "kubeadm reset" on all nodes first if kubeadm init fails with:
#   "error execution phase preflight: [preflight] Some fatal errors occurred:
#   [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
#   [ERROR Port-10250]: Port 10250 is in use
#   [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists"

# Install the regenerated kubeconfig, then delete the stale node entry
# (the jsonpath filter needs the "?(...)" form to select by condition)
cp kubernetes/admin.conf ~/.kube/config
kubectl get nodes --sort-by=.metadata.creationTimestamp
kubectl delete node $(kubectl get nodes -o jsonpath='{.items[?(@.status.conditions[0].status=="Unknown")].metadata.name}')
kubectl get pods --all-namespaces
After these steps, join your worker (slave) nodes to the master again. Reference: https://medium.com/@juniarto.samsudin/ip-address-changes-in-kubernetes-master-node-11527b867e88
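Rejoining the workers needs a valid bootstrap token. A sketch of printing a fresh join command on the master, guarded so it degrades gracefully on a machine without kubeadm:

```shell
# On the master: print the "kubeadm join ..." command to run on each worker.
# "kubeadm token create --print-join-command" exists in modern kubeadm releases.
if command -v kubeadm >/dev/null 2>&1; then
    kubeadm token create --print-join-command
else
    echo "kubeadm not installed on this machine"
fi
```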
You do not want to use kubeadm reset. That would reset everything and you would have to start configuring your cluster again.
Well, in your scenario, please have a look at the steps below:

nano /etc/hosts (update your new IP against YOUR_HOSTNAME)

nano /etc/kubernetes/config (configuration settings related to your cluster); in this file, look for the following params and update them accordingly:
KUBE_MASTER="--master=http://YOUR_HOSTNAME:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://YOUR_HOSTNAME:2379" #2379 is default port
nano /etc/etcd/etcd.conf (conf related to etcd):
KUBE_ETCD_SERVERS="--etcd-servers=http://YOUR_HOSTNAME/WHERE_EVER_ETCD_HOSTED:2379"
2379 is the default port for etcd, and you can have multiple etcd servers defined here, comma separated.
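For example, with a three-member etcd cluster the line could look like this (etcd1 through etcd3 are hypothetical hostnames):

```shell
# Hypothetical etcd.conf fragment: three members, comma separated
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379"
echo "$KUBE_ETCD_SERVERS"
```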
Restart the kubelet, apiserver, and etcd services.
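The restart step can be scripted. A sketch assuming the services run as systemd units named etcd, kube-apiserver, and kubelet (on kubeadm-built clusters the apiserver runs as a static pod instead, so only kubelet applies there):

```shell
# Restart in dependency order: etcd first, then the API server, then kubelet.
# Failures are reported rather than aborting the loop.
for svc in etcd kube-apiserver kubelet; do
    echo "restarting $svc"
    systemctl restart "$svc" 2>/dev/null || echo "warning: could not restart $svc"
done
```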
It is good to use a hostname instead of an IP to avoid such scenarios.
Hope it helps!