
Running one pod per node with deterministic hostnames

I have what I believe is a simple goal, but I can't figure out how to get Kubernetes to play ball.

For my particular application, I am trying to deploy a number of replicas of a Docker image that acts as a worker for another service. This system uses the hostname of the worker to distinguish between workers that are running at the same time.

I would like to be able to deploy a cluster where every node runs a worker for this service.

The problem is that the master also keeps track of every worker that has ever worked for it, and displays these in a status dashboard. The intent is that you spin up a fixed number of workers by hand and leave it that way. I would like to be able to resize my cluster and have the number of workers change accordingly.

This seems like a perfect application for a DaemonSet, except that the pod hostnames are then randomly generated and the master ends up tracking many orphaned hostnames.
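To illustrate the DaemonSet problem, here is a minimal sketch (the `worker` name, `app: worker` label, and `example/worker:latest` image are placeholders, not from the original post). A DaemonSet does guarantee one pod per node, but each pod name, and therefore its hostname, gets a random suffix such as `worker-x7k2p`, which is exactly what pollutes the master's worker list:

```yaml
# DaemonSet sketch: one pod per node, but pod names (and hostnames)
# carry a random suffix, so the master accumulates orphaned entries
# whenever pods are rescheduled.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: worker
spec:
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: example/worker:latest   # placeholder image
```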

An alternative might be a StatefulSet, which gives us deterministic hostnames, but I can't find a way to force it to scale to one pod per node.
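For contrast, a StatefulSet sketch (again, names and image are placeholders). Pods are named `worker-0`, `worker-1`, ..., and each pod's hostname matches its pod name, so the master would see stable worker identities. The catch is that `replicas` is a fixed number, not "one per node":

```yaml
# Headless service required by the StatefulSet for stable network IDs.
apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  clusterIP: None
  selector:
    app: worker
---
# StatefulSet sketch: deterministic pod names/hostnames (worker-0,
# worker-1, ...), but replicas must be adjusted by hand as the
# cluster grows or shrinks.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: worker
spec:
  serviceName: worker    # yields DNS names like worker-0.worker
  replicas: 3            # fixed count, not tied to node count
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: example/worker:latest   # placeholder image
```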

The system I am running is open source, and I am looking into changing how it identifies workers to avoid this mess, but I was wondering whether there is any sensible way to dynamically scale a StatefulSet to the number of nodes in the cluster, or any other way to achieve similar functionality.

One way is to use nodeSelector , but I totally agree with @Markus: the more correct and advanced way is to use anti-affinity . This is a really powerful and at the same time simple solution that prevents pods with the same labels from being scheduled onto the same node.
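The two ideas can be combined. The sketch below (labels and image are placeholders) uses a StatefulSet for deterministic hostnames plus a required `podAntiAffinity` rule so that no two workers land on the same node. If `replicas` is set at or above the node count, the extra pods simply stay `Pending` until a new node joins:

```yaml
# StatefulSet + required anti-affinity sketch: stable hostnames
# AND at most one worker per node.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: worker
spec:
  serviceName: worker
  replicas: 5                      # set >= expected node count
  podManagementPolicy: Parallel    # so a Pending pod does not block the rest
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: worker
            topologyKey: kubernetes.io/hostname   # one pod per node
      containers:
      - name: worker
        image: example/worker:latest   # placeholder image
```

Note that with the default `OrderedReady` pod management policy, one `Pending` pod would block the creation of all later ordinals, which is why this sketch sets `podManagementPolicy: Parallel`.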

