Kubernetes - What are kube-system pods and is it safe to delete them?

I currently have a cluster running on GCloud, which I created with 3 nodes. This is what I get when I run kubectl describe nodes:

Name:           node1
Capacity:
  cpu:     1
  memory:  3800808Ki
  pods:    40
Non-terminated Pods:  (3 in total)
  Namespace    Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ─────────    ────                                                         ────────────  ──────────  ───────────────  ─────────────
  default      my-pod1                                                      100m (10%)    0 (0%)      0 (0%)           0 (0%)
  default      my-pod2                                                      100m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  fluentd-cloud-logging-gke-little-people-e39a45a8-node-75fn   100m (10%)    100m (10%)  200Mi (5%)       200Mi (5%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ────────────  ──────────  ───────────────  ─────────────
  300m (30%)    100m (10%)  200Mi (5%)       200Mi (5%)

Name:           node2
Capacity:
  cpu:     1
  memory:  3800808Ki
  pods:    40
Non-terminated Pods:  (4 in total)
  Namespace    Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ─────────    ────                                                         ────────────  ──────────  ───────────────  ─────────────
  default      my-pod3                                                      100m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  fluentd-cloud-logging-gke-little-people-e39a45a8-node-wcle   100m (10%)    100m (10%)  200Mi (5%)       200Mi (5%)
  kube-system  heapster-v11-yi2nw                                           100m (10%)    100m (10%)  236Mi (6%)       236Mi (6%)
  kube-system  kube-ui-v4-5nh36                                             100m (10%)    100m (10%)  50Mi (1%)        50Mi (1%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ────────────  ──────────  ───────────────  ─────────────
  400m (40%)    300m (30%)  486Mi (13%)      486Mi (13%)

Name:           node3
Capacity:
  cpu:     1
  memory:  3800808Ki
  pods:    40
Non-terminated Pods:  (3 in total)
  Namespace    Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ─────────    ────                                                         ────────────  ──────────  ───────────────  ─────────────
  kube-system  fluentd-cloud-logging-gke-little-people-e39a45a8-node-xhdy   100m (10%)    100m (10%)  200Mi (5%)       200Mi (5%)
  kube-system  kube-dns-v9-bo86j                                            310m (31%)    310m (31%)  170Mi (4%)       170Mi (4%)
  kube-system  l7-lb-controller-v0.5.2-ae0t2                                110m (11%)    110m (11%)  70Mi (1%)        120Mi (3%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ────────────  ──────────  ───────────────  ─────────────
  520m (52%)    520m (52%)  440Mi (11%)      490Mi (13%)

Now, as you can see, I have 3 pods of my own: 2 on node1 and 1 on node2. What I would like to do is move all pods onto node1 and delete the other two nodes. However, there are also pods belonging to the kube-system namespace, and I don't know what effect deleting them might have.

I can tell that the pods named fluentd-cloud-logging-... and heapster-... are used for logging and monitoring resource usage, but I don't really know whether I can move the kube-dns-v9-bo86j and l7-lb-controller-v0.5.2-ae0t2 pods to another node without repercussions.
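For reference, all of these system pods can be listed in one go, together with the node each one landed on (a minimal sketch; --namespace and -o wide are standard kubectl get flags):

kubectl get pods --namespace=kube-system -o wide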

Can anyone offer some insight as to how I should proceed?

Thank you very much.

Killing them so that they'll be rescheduled on another node is perfectly fine. They can all be rescheduled, other than the fluentd pods, which are bound one to each node.
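For example, deleting one of them by name is enough (a sketch using a pod name from the output above; these addon pods are typically managed by ReplicationControllers, so a replacement with a new random suffix should reappear shortly, possibly on a different node):

kubectl delete pod heapster-v11-yi2nw --namespace=kube-system

# Watch the replacement come back and see which node it was scheduled on:
kubectl get pods --namespace=kube-system -o wide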

If you want to downsize your cluster, you can just delete two of the three nodes and let the system reschedule any pods that were lost when the nodes were removed. If there isn't enough space on the remaining node, you may see the pods go permanently pending. Having the kube-system pods pending isn't ideal, because each of them performs a "system function" for your cluster (e.g. DNS, monitoring, etc.) and without them running your cluster won't be fully functional.
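Concretely, the downsizing could look like the sketch below. Assumptions: the cluster name little-people is inferred from the node names above, the zone is a placeholder, kubectl drain requires kubectl v1.2 or newer, and older gcloud releases spell the resize flag --size instead of --num-nodes.

# Evict the pods from the nodes being removed so they reschedule cleanly
kubectl drain node2
kubectl drain node3

# Then shrink the node pool down to the one remaining node
gcloud container clusters resize little-people --num-nodes=1 --zone=us-central1-a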

You can also disable some of the system pods, if you don't need their functionality, using the gcloud container clusters update command.
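For instance, the logging and monitoring addons can be switched off per cluster (a sketch; the cluster name is again inferred from the node names, and you should confirm the exact flags with gcloud container clusters update --help for your gcloud version):

# Stop shipping container logs, which should also retire the fluentd-cloud-logging pods
gcloud container clusters update little-people --logging-service=none

# Stop cluster monitoring, which should retire the heapster pod
gcloud container clusters update little-people --monitoring-service=none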
