Running two Kubernetes pods on different nodes

Is there a way to tell Kubernetes never to run two pods on the same node? For example, I have two pod replicas, and I want them always distributed across zone1/zone2 and never together in the same zone.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: testApp
  labels:
    app: testApp-front
  namespace: 
spec:
  replicas: 2
  selector:
    matchLabels:
      app: testApp-front
  template:
    metadata:
      labels:
        app: testApp-front
    spec:      
      nodeSelector:
        failure-domain.beta.kubernetes.io/zone: zone1

Seems like it can be done with inter-pod anti-affinity; for example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: testApp-front
  replicas: 3
  template:
    metadata:
      labels:
        app: testApp-front
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - testApp-front
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-testApp-front
        image: nginx:1.12-alpine

You can see the full example here.

I think you need the concept of pod anti-affinity. Within one cluster, it ensures that the pods do not reside on the same worker node. https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity

Quite simply: you can use a DaemonSet to run one copy of the pod on each node, or, as others have said, you can use pod anti-affinity.
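A DaemonSet schedules exactly one copy of the pod on every eligible node, so two copies can never land on the same node. A minimal sketch, reusing the `testApp-front` labels and nginx image from the question (note the trade-off: you give up control of the replica count, which always equals the number of nodes):

```yaml
# Sketch: a DaemonSet runs one testApp-front pod per node.
# The replica count is implicit -- one pod per matching node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: testapp-front
spec:
  selector:
    matchLabels:
      app: testApp-front
  template:
    metadata:
      labels:
        app: testApp-front
    spec:
      containers:
      - name: testapp-front
        image: nginx:1.12-alpine
```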

The k8s scheduler is a smart piece of software.

  1. The Kubernetes scheduler will first determine all possible nodes where a pod can be deployed, based on your affinity/anti-affinity rules, resource limits, etc.

  2. Afterward, the scheduler will find the best node where the pod can be deployed. If possible, it will automatically schedule the pods onto separate availability zones and separate nodes.

PS: If you never want two replicas of a pod to be on the same node, define an anti-affinity rule.
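Since the question asks about zones rather than nodes, the same anti-affinity pattern works with a zone-level `topologyKey`. A sketch, assuming the nodes carry the well-known `topology.kubernetes.io/zone` label (older clusters use `failure-domain.beta.kubernetes.io/zone`, as in the question's nodeSelector):

```yaml
# Sketch: anti-affinity keyed on the zone label, so the two
# testApp-front replicas are never scheduled into the same zone.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testApp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: testApp-front
  template:
    metadata:
      labels:
        app: testApp-front
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: testApp-front
            topologyKey: topology.kubernetes.io/zone
      containers:
      - name: testapp-front
        image: nginx:1.12-alpine
```

With `requiredDuringSchedulingIgnoredDuringExecution`, a replica that cannot find a free zone stays Pending; use `preferredDuringSchedulingIgnoredDuringExecution` instead if best-effort spreading is acceptable.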

