
Kubernetes - how to assign pods to nodes with certain label

Suppose I have the following nodes, labeled env=staging and env=production:

NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME   ENV
server0201     Ready    worker   79d   v1.18.2   10.2.2.22     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0202     Ready    worker   79d   v1.18.2   10.2.2.23     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0203     Ready    worker   35d   v1.18.3   10.2.2.30     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     staging
server0301     Ready    worker   35d   v1.18.3   10.2.3.21     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0302     Ready    worker   35d   v1.18.3   10.2.3.29     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0303     Ready    worker   35d   v1.18.0   10.2.3.30     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     staging
server0304     Ready    worker   65d   v1.18.2   10.2.6.22     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
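
For context (an illustrative sketch, not part of the original post), labels like these would have been applied with kubectl label:

kubectl label nodes server0203 server0303 env=staging
kubectl label nodes server0201 server0202 server0301 server0302 server0304 env=production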

I tried using nodeSelector and nodeAffinity, but when my selector label is env=staging, all my pods keep landing on server0203 and never on server0303, no matter how many replicas I create.

The same happens with env=production: everything lands only on server0201.

What should I do to make sure my pods are spread evenly across the nodes that carry these labels?

Here is my deployment spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: gab
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: helloworld
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: env
                operator: In
                values:
                - staging
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80
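
For comparison, a plain nodeSelector (the other approach mentioned above) expresses the same hard constraint. This is a sketch, not from the original post; the fragment would replace the affinity block in the pod template spec:

    spec:
      nodeSelector:
        env: staging      # simple equality match against node labels
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80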

There are no taints on the worker nodes (only the masters are tainted):

kubectl get nodes -o json | jq '.items[].spec.taints'
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
null
null
null
null
null
null
null

Here are all the node labels:

NAME           STATUS   ROLES    AGE   VERSION   LABELS
server0201     Ready    worker   80d   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0202,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0202     Ready    worker   80d   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0203,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0203     Ready    worker   35d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=staging,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0210,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0301     Ready    worker   35d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0301,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0302     Ready    worker   35d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0309,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0303     Ready    worker   35d   v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=staging,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0310,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0304     Ready    worker   65d   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0602,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker

After playing around with this, I realized there is nothing wrong with nodeSelector or nodeAffinity. In fact, I can even achieve what my question asked for by using the node-selector annotation on my namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: gab
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: env=production
spec: {}
status: {}    
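
Note that this annotation is honored by the PodNodeSelector admission plugin, so it only takes effect when that plugin is enabled on the API server. The annotation can also be set on an existing namespace directly:

kubectl annotate namespace gab scheduler.alpha.kubernetes.io/node-selector=env=production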

As long as my deployment runs inside that namespace, the node selector works:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: gab
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 10 # tells deployment to run 10 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80
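
To check where the replicas actually land, the node column of the pod listing is enough:

kubectl -n gab get pods -o wide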

As for why it behaved that way at first: the second of my staging-labeled nodes had slightly higher utilization than the node my pods kept landing on:

  Resource           Requests     Limits
  --------           --------     ------
  cpu                3370m (14%)  8600m (35%)
  memory             5350Mi (4%)  8600Mi (6%)
  ephemeral-storage  0 (0%)       0 (0%)

The node I kept landing on, by contrast, shows:

  Resource           Requests    Limits
  --------           --------    ------
  cpu                1170m (4%)  500100m (2083%)
  memory             164Mi (0%)  100Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
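
These summaries are the "Allocated resources" section of the node's describe output; for example:

kubectl describe node <node-name> | grep -A 7 "Allocated resources"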

When I tested with production, where there are more nodes, the pods did get distributed across several of them.

So my take (I may be wrong) is that the scheduler balances pods based on server load rather than trying to spread them evenly across nodes.
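
To actually force an even spread, as the original question asked, the scheduler's default scoring can be overridden with a pod topology spread constraint. The following is a minimal sketch, assuming the cluster supports topologySpreadConstraints (beta and enabled by default in v1.18, stable in v1.19):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: gab
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 2
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: env
                operator: In      # "In" with a one-element list expresses equality
                values:
                - staging
      topologySpreadConstraints:
      - maxSkew: 1                              # pod counts per node may differ by at most 1
        topologyKey: kubernetes.io/hostname     # treat every node as its own spread domain
        whenUnsatisfiable: DoNotSchedule        # hard rule; ScheduleAnyway makes it best-effort
        labelSelector:
          matchLabels:
            app: helloworld
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80

On clusters where this feature is unavailable, a preferred podAntiAffinity with topologyKey: kubernetes.io/hostname is the usual softer alternative: it discourages, but does not forbid, placing two replicas on the same node.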
