Kubernetes cluster nodes not created automatically when one is lost in Kubespray
I have successfully deployed a multi-master Kubernetes cluster using the repo https://github.com/kubernetes-sigs/kubespray and everything works fine. But when I stop/terminate a node in the cluster, no new node joins the cluster to replace it. I had previously deployed Kubernetes using KOPS, and there new nodes were created automatically when one was deleted. Is this the expected behaviour in kubespray? Please help.
It is expected behavior, because kubespray doesn't create any ASGs (Auto Scaling Groups), which are AWS-specific resources. Kubespray only deals with existing machines; the repo does offer some Terraform tooling for provisioning machines, but kubespray itself does not get into that business.
You have a few options available to you:
- Use the `scale.yml` playbook: create an inventory file containing the new machine, plus the etcd machines (presumably so kubespray can issue etcd certificates for the new Node), then invoke the `scale.yml` playbook. You may enjoy AWX in support of that.
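A minimal sketch of that `scale.yml` flow might look like the following. All hostnames, IPs, and paths here are hypothetical, and the group names follow kubespray's sample inventory (older releases use `kube-node` rather than `kube_node`), so check them against the version you deployed:

```shell
# Hypothetical inventory for scaling: the new worker plus the existing
# etcd machines, so kubespray can issue etcd certificates for the new Node.
cat > inventory/mycluster/scale-inventory.ini <<'EOF'
[all]
node4  ansible_host=10.0.0.14   # the replacement worker
etcd1  ansible_host=10.0.0.11
etcd2  ansible_host=10.0.0.12
etcd3  ansible_host=10.0.0.13

[kube_node]
node4

[etcd]
etcd1
etcd2
etcd3
EOF

# Run only the scale playbook, limited to the new node, so the
# existing cluster members are left untouched.
ansible-playbook -i inventory/mycluster/scale-inventory.ini \
  --become scale.yml --limit=node4
```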
- Use plain `kubeadm join`. (This is the mechanism I use for my clusters, FWIW.)
  - Create a kubeadm join token using `kubeadm token create --ttl 0` (or whatever TTL you feel comfortable using).
  - You'll only need to do this once, or perhaps once per ASG, depending on your security tolerances.
  - Use the cloud-init mechanism to ensure that the `docker`, `kubeadm`, and `kubelet` binaries are present on the machine.
  - You are welcome to use an AMI for doing that, too, if you enjoy building AMIs.
  - Then run `kubeadm join` as described here: https://kubernetes.io/docs/setup/independent/high-availability/#install-workers
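Put together, those token/join steps might look roughly like this. The API server address, token, and CA-cert hash below are placeholders, and the package installation assumes the Kubernetes package repository is already configured on the image:

```shell
# --- Once, on an existing control-plane node ---
# Mint a join token and print the complete join command for workers.
# --ttl 0 makes the token non-expiring; pick a TTL that matches your
# security tolerances.
kubeadm token create --ttl 0 --print-join-command

# --- On each new worker, e.g. from a cloud-init runcmd or a baked AMI ---
# Ensure the container runtime and kubeadm/kubelet are present
# (apt shown here; adjust for your distro and package repos).
apt-get update && apt-get install -y docker.io kubeadm kubelet

# Join using the values printed by the control-plane command above
# (placeholder endpoint, token, and hash).
kubeadm join 10.0.0.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:0123456789abcdef
```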
- There are plenty of "machine controller" components that aim to use custom controllers inside Kubernetes to manage your node pools declaratively. I don't have experience with them, but I believe they do work. That link was just the first one that came to mind; there are others, too.
- Our friends over at Kubedex have an entire page devoted to this question.