
How can I stop Kubernetes control plane pods?

Just curious: with Mesos I'm used to being able to do systemctl stop mesos-master and systemctl start mesos-master (if I need to bounce it for some reason). With k8s, there are multiple components to 'stop' in the control plane, such as the apiserver, controller-manager, etc.

When creating a cluster with kubeadm, it runs the control plane as pods (no ReplicaSet or anything like that, perhaps because I only have a single master at the moment).
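
For reference, on a typical kubeadm cluster these control plane pods show up as ordinary pods in the kube-system namespace; exact names and labels vary by version and cluster, so treat this as a sketch:

 $ kubectl get pods -n kube-system -o wide                  # control plane pod names usually end with the master node's name
 $ kubectl -n kube-system get pods -l tier=control-plane    # recent kubeadm versions label their static pods like this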

What's the best way to stop the things in the control plane and then start them again, without tearing down the cluster?

A Kubernetes cluster is itself divided into micro-services, which means each component should be independent of the others. If one component fails, it should not affect the other components, in order to avoid a terrible cascading effect.

Let's begin with the Linux kernel. It makes sure that systemd is healthy and doing its job. Then kubeadm makes sure that the kubelet (on the master node) is running as a systemd service. You can check it with the following command:

systemctl status kubelet

The kubelet (on the master node) makes sure that the control plane components (etcd, kube-apiserver, kube-controller-manager and kube-scheduler) are running as pods under the Docker engine; however, they behave much like systemd services, as they use host networking and a host socket in their manifest files. You can check their status using systemctl and journalctl.

 systemctl status kube-apiserver
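
Because on a kubeadm cluster these components actually run as static pod containers, the kubelet's journal and the container runtime are another place to look; a minimal sketch, assuming a Docker-based master node:

 $ journalctl -u kubelet --no-pager | tail -n 50    # kubelet logs, including static pod (re)creation
 $ docker ps | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd'    # the control plane containers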

Pods on the worker nodes use the pod network, which is provided by CNI plugins.
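
If you want to see which CNI plugin is in use, the CNI configuration normally lives under /etc/cni/net.d on each node, and the plugin itself usually runs as a DaemonSet in kube-system (flannel, calico, weave, etc.); both commands below are only illustrative:

 $ ls /etc/cni/net.d/                       # CNI configuration files on the node
 $ kubectl get daemonsets -n kube-system    # the CNI plugin typically appears here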

Now the k8s cluster is alive, and the kube-apiserver and other components are healthy. You can feed it all other k8s resources such as Deployments, ReplicaSets, Services, etc., and they will be deployed to the worker nodes. Everything will work according to your desire, which is stored as the desired state in etcd.
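
As a small illustration of feeding resources to the cluster (the names here are just examples):

 $ kubectl create deployment web --image=nginx     # desired state is recorded in etcd via the apiserver
 $ kubectl expose deployment web --port=80         # a Service in front of the Deployment's pods
 $ kubectl get deployments,services,pods           # current state converging towards the desired state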

Once the deployed resources (pods, services, etc.) are running on the worker nodes, the master node's responsibility is to make sure that desired state === current state.

If the master node is dead, the worker nodes become orphans, which means your current state would be the final state.

Answer to your question:

You can start each component on the master node, but keep in mind the dependencies between them.

Examples

If the kube-apiserver fails, the other components (kube-scheduler, kube-controller-manager) won't be able to talk to etcd (the source of truth), since they go through the apiserver.
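
A quick way to check whether the apiserver itself is still answering (6443 is the kubeadm default port; adjust if your setup differs, and note that anonymous access to /healthz may be disabled on some clusters):

 $ kubectl get --raw /healthz                 # prints 'ok' when the apiserver is healthy
 $ curl -k https://127.0.0.1:6443/healthz     # the same check without going through kubectl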

The kube-controller-manager is further divided into controllers such as the ReplicaSet controller, Deployment controller, Service controller, etc. They mind their own business and make sure that desired state === current state. The interesting thing is that if one of the controllers inside the kube-controller-manager fails, it stops all of the controllers and terminates itself. The kubelet will then bring it up again.
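
You can watch this self-healing in the restart count of the static pod; this assumes a kubeadm cluster, which labels its static pods with a component label, and the pod name itself carries the master node's name:

 $ kubectl -n kube-system get pods -l component=kube-controller-manager          # the RESTARTS column counts kubelet restarts
 $ kubectl -n kube-system logs -l component=kube-controller-manager --tail=20    # recent logs from the current instance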

In conclusion, we need to make sure that our master node does not have any single point of failure, which is why we always want a highly available control plane.

The Kubernetes control plane pods are often deployed as static pods. These are not managed by any kind of Deployment controller, but are defined in static (hence the name) configuration files that are placed in a configuration directory (for example /etc/kubelet.d/ or /etc/kubernetes/manifests, depending on how your cluster is set up). These definition files are picked up by the kubelet running on the Kubernetes master node, which creates the respective pods.
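
On a kubeadm cluster that directory is /etc/kubernetes/manifests by default, and it normally contains one manifest per control plane component (the exact contents depend on your setup):

 $ ls /etc/kubernetes/manifests
 etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml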

According to the documentation, you can stop/delete static pods simply by removing the respective configuration files, and start/create them again by creating new files:

A running kubelet periodically scans the configured directory (/etc/kubelet.d in our example) for changes and adds/removes pods as files appear/disappear in this directory.

 [joe@my-node1 ~] $ mv /etc/kubelet.d/static-web.yaml /tmp
 [joe@my-node1 ~] $ sleep 20
 [joe@my-node1 ~] $ docker ps
 // no nginx container is running
 [joe@my-node1 ~] $ mv /tmp/static-web.yaml /etc/kubelet.d/
 [joe@my-node1 ~] $ sleep 20
 [joe@my-node1 ~] $ docker ps
 CONTAINER ID        IMAGE           COMMAND                  CREATED          ...
 e7a62e3427f1        nginx:latest    "nginx -g 'daemon of     27 seconds ago

To temporarily disable/enable these pods, simply move the definition files to a safe location and back again:

$ mv /etc/kubelet.d/*.yaml /tmp   # Disable static pods
$ mv /tmp/*.yaml /etc/kubelet.d   # Re-enable static pods
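
Afterwards you can confirm that the control plane came back up; which of these checks applies depends on your container runtime, so treat them as typical examples rather than exact commands:

 $ kubectl get pods -n kube-system        # the apiserver must be back for this to respond
 $ docker ps | grep kube-apiserver        # or: crictl ps | grep kube-apiserver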
