
Kubernetes anti-affinity rule to spread Deployment Pods to at least 2 nodes

I have the following anti-affinity rule configured in my k8s Deployment:

spec:
  ...
  selector:
    matchLabels:
      app: my-app
      environment: qa
  ...
  template:
    metadata:
      labels:
        app: my-app
        environment: qa
        version: v0
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: kubernetes.io/hostname

With this rule I say that I do not want any Pod replica to be scheduled onto a node of my k8s cluster where a Pod of the same application is already present. So, for instance, having:

nodes(a,b,c) = 3
replicas(1,2,3) = 3

replica_1 scheduled on node_a, replica_2 scheduled on node_b, and replica_3 scheduled on node_c

As such, each Pod is scheduled on a different node.

However, I was wondering if there is a way to specify: "I want to spread my Pods across at least 2 nodes" to guarantee high availability, without requiring every Pod to land on its own node. For example:

nodes(a,b,c) = 3
replicas(1,2,3) = 3

replica_1 scheduled on node_a, replica_2 scheduled on node_b, and replica_3 scheduled (again) on node_a

So, to sum up, I would like a softer constraint that lets me guarantee high availability by spreading the Deployment's replicas across at least 2 nodes, without having to provision a node for every Pod of a given application.
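The closest I could come up with so far is the soft (preferred) form of the same rule, sketched below (the weight of 100 is an arbitrary value I picked), but as far as I understand it only expresses a preference and does not guarantee that at least 2 nodes are actually used:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100                 # arbitrary; a higher weight makes the preference count more
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - my-app
        topologyKey: kubernetes.io/hostname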

Thanks!

I think I found a solution to your problem. Look at this example YAML file:

spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        example: app
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - worker-1
            - worker-2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - worker-1

The idea of this configuration: I'm using nodeAffinity here to indicate on which nodes the Pods can be placed:

- key: kubernetes.io/hostname

and

values:
- worker-1
- worker-2
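If you are not sure which kubernetes.io/hostname values exist in your own cluster, you can list the node labels first (standard kubectl; nothing beyond a working kubeconfig is assumed):

kubectl get nodes --show-labels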

It is important to set the following line:

- maxSkew: 1

According to the documentation:

maxSkew describes the degree to which Pods may be unevenly distributed. It must be greater than zero.

Thanks to this, the difference in the number of Pods assigned to each node will never be greater than 1.
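As a hypothetical illustration with the two eligible nodes from the example above and 3 replicas (node names worker-1 and worker-2 are assumed):

# skew = (Pods on the most loaded eligible node) - (Pods on the least loaded eligible node)
# worker-1: 2 Pods, worker-2: 1 Pod   ->  skew = 2 - 1 = 1  (allowed with maxSkew: 1)
# worker-1: 3 Pods, worker-2: 0 Pods  ->  skew = 3 - 0 = 3  (blocked, because whenUnsatisfiable: DoNotSchedule)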

This section:

      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - worker-1

is optional; however, it will let you tune the Pod distribution across the free nodes even better. Here you can find a description of the difference between requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution:

Thus an example of requiredDuringSchedulingIgnoredDuringExecution would be "only run the pod on nodes with Intel CPUs" and an example preferredDuringSchedulingIgnoredDuringExecution would be "try to run this set of pods in failure zone XYZ, but if it's not possible, then allow some to run elsewhere".
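In the same spirit, if you would rather not pin Pods to specific node names at all, a purely soft variant of the topology spread constraint is also possible by switching whenUnsatisfiable to ScheduleAnyway. A minimal sketch, with the label selector adjusted to the app: my-app label from the question:

spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway   # soft: prefer an even spread, but still schedule if it cannot be achieved
    labelSelector:
      matchLabels:
        app: my-app

This keeps the spread as a scheduling preference rather than a hard requirement, so no replica ever stays Pending because of it.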

