

How do I make Kubernetes scale my deployment based on the "ready"/ "not ready" status of my Pods?

I have a deployment with a defined number of replicas. I use a readiness probe to communicate whether my Pod is ready or not ready to handle new connections – my Pods toggle between the ready and not ready states during their lifetime.
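For context, the readiness probe described here might look roughly like the fragment below – a minimal sketch in which the container name, image, /healthz path, port and timing values are assumptions for illustration, not details from the question:

# hypothetical container spec fragment; probe endpoint and timings are assumed
containers:
- name: app
  image: my-app:latest   # placeholder image
  ports:
  - containerPort: 80
  readinessProbe:
    httpGet:
      path: /healthz     # assumed health endpoint
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10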

I want Kubernetes to scale the deployment up or down to ensure that there is always the desired number of Pods in a ready state.

Example:

  • If replicas is 4 and there are 4 Pods in the ready state, then Kubernetes should keep the current replica count.
  • If replicas is 4 and there are 2 ready Pods and 2 not ready Pods, then Kubernetes should add 2 more Pods.

How do I make Kubernetes scale my deployment based on the "ready"/ "not ready" status of my Pods?

Ensuring you always have 4 pods running can be done by specifying the replicas property in your deployment definition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 4  # here we define a requirement for 4 replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Kubernetes will ensure that if any pods crash, replacement pods will be created so that a total of 4 are always available.

I don't think this is possible. If a Pod is not ready, Kubernetes will not make it ready, because readiness is something related to your application. Even if it created a new Pod, there would be no guarantee that the new one becomes ready either. So you have to resolve the reasons behind the not-ready status yourself. The only thing Kubernetes does is keep not-ready Pods out of the load balancing so that requests don't fail.
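To illustrate that last point: when Pods are exposed through a Service, only Pods that pass their readiness probe are listed in the Service's endpoints, so not-ready Pods simply stop receiving traffic – they are not replaced. A minimal sketch (the Service name and port are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service   # assumed name, for illustration only
spec:
  selector:
    app: nginx          # matches the pod labels from the deployment above
  ports:
  - port: 80
    targetPort: 80
# only Pods in the Ready state are added to this Service's endpoints;
# not-ready Pods are removed from load balancing but not replaced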

You cannot schedule deployments on unhealthy nodes in the cluster. The API server will only create Pods on nodes that are healthy, schedulable, and have enough quota left to run additional Pods.

Moreover, what you describe is the self-healing (auto-heal) concept of Kubernetes, which is taken care of for you at a basic level.
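As a side note on self-healing: besides replacing crashed Pods through the Deployment, the kubelet can also restart a container that stops responding if a liveness probe is defined. A hedged sketch, adding such a probe to the nginx container from the deployment above (the probe path and timings are assumptions):

# liveness probe added to the nginx container shown earlier;
# the probe path and timings are assumptions for illustration
containers:
- name: nginx
  image: nginx:1.7.9
  ports:
  - containerPort: 80
  livenessProbe:
    httpGet:
      path: /            # assumed: nginx default page as a liveness check
      port: 80
    initialDelaySeconds: 15
    periodSeconds: 20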
