
k8s pods to schedule in both spot and on-demand instances in EKS

We are planning to introduce AWS spot instances in production (non-prod is already running on spot). In order to achieve HA we run HPA with a minimum of 2 replicas for all critical deployments. Because of spot instance behaviour we also want to run on-demand instances, and one pod of each such deployment should always be running on an on-demand instance.
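For reference, a minimal HPA manifest matching that description might look like the following sketch; the name, target deployment, max replicas, and CPU threshold are placeholders, not from the original post:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-critical-app          # placeholder name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-critical-app        # placeholder deployment
      minReplicas: 2                 # two replicas minimum, for HA
      maxReplicas: 10                # placeholder ceiling
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # placeholder threshold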

Question:

Is there any way I can split the pods so that one pod of a deployment is launched on an on-demand instance, and all the other pods of the same deployment (the second one, since the minimum is 2, plus any pods HPA adds) are launched on spot instances?

We are already using nodeAffinity and podAntiAffinity, since we have multiple node groups for different reasons. Below is the snippet.

        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: category
                operator: In
                values:
                - <some value>
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: <label key>
                operator: In
                values:
                - <label value>
            topologyKey: "kubernetes.io/hostname"
    

Following up on your last message:

We will check two deployments with the same label in non-prod and then update here.

Just wondering how this went. Were there any issues/gotchas from this setup that you could share? Are you currently using this setup, or have you moved on to another one?

Short answer is no. There is no way to define placement per replica. As you are already using podAntiAffinity, just by adding the same pod labels you can ensure that no two replicas stay on the same host (if that's not what you are already doing). Then use a spot interruption handler to drain and reschedule without abrupt downtime during spot interruptions.
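A minimal sketch of that suggestion, assuming the deployment's pods carry the label app: my-critical-app (a placeholder): the anti-affinity term selects the deployment's own pod label, so no two of its replicas can be scheduled onto the same node.

    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app                 # the deployment's own pod label (placeholder)
              operator: In
              values:
              - my-critical-app        # placeholder value
          topologyKey: "kubernetes.io/hostname"  # at most one matching pod per node

For the interruption handling, AWS publishes the aws-node-termination-handler chart in its eks-charts repository; one way to install it:

    helm repo add eks https://aws.github.io/eks-charts
    helm install aws-node-termination-handler eks/aws-node-termination-handler --namespace kube-system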
