
Kubernetes Pod anti-affinity - evenly spread pods based on a label?

We are finding that our Kubernetes cluster tends to have hot-spots where certain nodes get far more instances of our apps than other nodes.

In this case, we are deploying lots of instances of Apache Airflow, and some nodes have 3x more web or scheduler components than others.

Is it possible to use anti-affinity rules to force a more even spread of pods across the cluster?

E.g. "prefer the node with the fewest pods with the label component=airflow-web"?

If anti-affinity does not work, are there other mechanisms we should be looking into as well?

Try adding this to the Deployment/StatefulSet .spec.template:

      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "component"
                  operator: In
                  values:
                  - airflow-web
              topologyKey: "kubernetes.io/hostname"

Have you tried configuring the kube-scheduler?

kube-scheduler selects a node for the pod in a 2-step operation:

  • Filtering: finds the set of Nodes where it's feasible to schedule the Pod.
  • Scoring: ranks the remaining nodes to choose the most suitable Pod placement.

Scheduling Policies can be used to specify the predicates and priorities that the kube-scheduler runs to filter and score nodes:

kube-scheduler --policy-config-file <filename>

One of the priorities relevant to your scenario is:

  • BalancedResourceAllocation: favors nodes with balanced resource usage.
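A minimal sketch of what such a policy file could look like, assuming the legacy policy format accepted by --policy-config-file (the predicate and priority names below are the standard built-in ones; note that policy files were deprecated in favor of scheduler configuration in later Kubernetes releases, so verify against your scheduler version):

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsResources"},
    {"name": "PodFitsHostPorts"}
  ],
  "priorities": [
    {"name": "BalancedResourceAllocation", "weight": 1},
    {"name": "LeastRequestedPriority", "weight": 1}
  ]
}
```

The weights control how much each priority contributes to a node's final score during the Scoring step.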

The right solution here is pod topology spread constraints: https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/

Anti-affinity only works until each node has at least 1 pod. Spread constraints actually balance based on the pod count per node.
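A minimal sketch of such a constraint in the pod template's spec, assuming the same component=airflow-web label used above:

```yaml
topologySpreadConstraints:
- maxSkew: 1                          # allowed pod-count difference between nodes
  topologyKey: kubernetes.io/hostname # spread across individual nodes
  whenUnsatisfiable: ScheduleAnyway   # soft constraint; use DoNotSchedule to make it hard
  labelSelector:
    matchLabels:
      component: airflow-web
```

With maxSkew: 1 the scheduler keeps the difference in matching pods between any two nodes to at most one, which continues balancing even after every node already has a pod.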

