
Partially Roll Out Kubernetes Pods

I have 1 node with 3 pods. I want to roll out a new image to 1 of the three pods while the other 2 pods stay on the old image. Is that possible?

Second question. I tried rolling out a new image that contains an error, and I had already defined maxUnavailable, but Kubernetes still rolled out all the pods. I thought Kubernetes would stop the rollout once it discovered an error in the first pod. Do we need to stop the rollout manually?

Here is my deployment script.

# Service setup
apiVersion: v1
kind: Service
metadata:
  name: semantic-service
spec:
  ports:
    - port: 50049
  selector:
    app: semantic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: semantic-service
spec:
  selector:
    matchLabels:
      app: semantic
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: semantic
    spec:
      containers:
      - name: semantic-service
        image: something/semantic-service:v2

As @David Maze wrote in the comments, you can consider a canary deployment, where you distinguish deployments of different releases or configurations of the same component using multiple labels, and then use those labels to route traffic to the different releases; more information about canary deployments can be found here. Another way to achieve your goal is a Blue/Green deployment, if you want to run two environments that are as identical as possible, with a straightforward way to switch between the Blue and Green environments and roll a deployment back at any moment.
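As a sketch of the canary approach (the `track` label values, Deployment names, and the `v1` image tag are illustrative assumptions, not from the original answer): run two Deployments that both carry the `app: semantic` label the Service selects on, and add a `track` label to tell the releases apart. With 2 stable replicas and 1 canary replica, exactly one pod out of three serves the new image:

```yaml
# Stable release: 2 replicas with the old image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: semantic-service-stable
spec:
  replicas: 2
  selector:
    matchLabels:
      app: semantic
      track: stable
  template:
    metadata:
      labels:
        app: semantic
        track: stable      # distinguishes the old release
    spec:
      containers:
      - name: semantic-service
        image: something/semantic-service:v1
---
# Canary release: 1 replica with the new image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: semantic-service-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: semantic
      track: canary
  template:
    metadata:
      labels:
        app: semantic
        track: canary      # distinguishes the new release
    spec:
      containers:
      - name: semantic-service
        image: something/semantic-service:v2
```

Because the Service selector in the question only matches `app: semantic`, it selects pods from both Deployments, so traffic is split roughly 2:1 between the old and new images.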

The answer to the second question depends on what kind of error the image contains and how Kubernetes detects that issue in the Pod. The maxUnavailable: 1 parameter only states the maximum number of Pods that may be unavailable during the update. During a Deployment update, the deployment controller creates a new Pod and then deletes an old one, as long as the number of available Pods still satisfies the rollingUpdate strategy parameters.

Additionally, Kubernetes uses liveness/readiness probes to check whether a Pod is ready (alive) during a deployment update, and it leaves the old Pod running until the probes have succeeded on the new replica. I would suggest configuring probes to track the status of the Pods while the Deployment rolls updates out across your cluster's Pods.
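As an illustrative sketch (the probe type and timing values are assumptions; port 50049 is taken from the Service in the question), a TCP readiness/liveness probe on the container could look like:

```yaml
    spec:
      containers:
      - name: semantic-service
        image: something/semantic-service:v2
        ports:
        - containerPort: 50049
        readinessProbe:           # the pod only receives traffic after this succeeds
          tcpSocket:
            port: 50049
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:            # the pod is restarted if this keeps failing
          tcpSocket:
            port: 50049
          initialDelaySeconds: 15
          periodSeconds: 20
```

If the service exposes an HTTP health endpoint, an `httpGet` probe against that path would be a more precise check than a bare TCP connect.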

Regarding question 1:

I have 1 node with 3 pods. I want to rollout a new image in 1 of the three pods and the other 2 pods stay with the old image. Is it possible?

Answer:
Change the maxSurge in your strategy to 0:

replicas: 3
strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1 <------ Of the 3 replicas, 1 can be unavailable
      maxSurge: 0       <------ You can't have more than the 3 replicas of pods at a time

Regarding question 2:

I tried rolling out a new image that contains error and I already define the maxUnavailable. But kubernetes still rollout all pods. I thought kubernetes will stop rolling out the whole pods, once kubernetes discover an error in the first pod. Do we need to manually stop the rollout?

A) In order for Kubernetes to stop rolling out all the pods, use minReadySeconds to specify how long a newly created pod must remain ready before it is considered available (combined with liveness/readiness probes, as @Nick_Kh suggested).
If one of the probes fails before the minReadySeconds interval has elapsed, the whole rollout is blocked.

So with a combination of maxSurge: 0, minReadySeconds, and liveness/readiness probes, you can achieve your desired state: 3 pods, 2 with the old image and 1 pod with the new image.
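Putting the pieces together, the Deployment spec could look like this sketch (the minReadySeconds value and probe timing are illustrative assumptions, not values from the question):

```yaml
spec:
  replicas: 3
  minReadySeconds: 30        # a new pod must stay ready 30s before counting as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most 1 of the 3 replicas unavailable
      maxSurge: 0            # never more than 3 pods in total
  template:
    metadata:
      labels:
        app: semantic
    spec:
      containers:
      - name: semantic-service
        image: something/semantic-service:v2
        readinessProbe:      # a failing probe blocks the rest of the rollout
          tcpSocket:
            port: 50049
          periodSeconds: 10
```

With maxSurge: 0, the controller replaces one pod at a time; if the first replacement never becomes ready, the rollout stalls there and the remaining 2 pods keep running the old image.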

B) In the case of A, you don't need to stop the rollout manually.

But in cases when you will have to do that, you can run:

$ kubectl rollout pause deployment <name>

Debug the non-functioning pods and take the relevant action.

If you decide to revert the rollout you can run:

$ kubectl rollout undo deployment <name> --to-revision=1

(View revisions with: $ kubectl rollout history deployment <name>.)

Notice that after you paused the rollout, you need to resume it with:

$ kubectl rollout resume deployment <name>

even if you decide to undo and return to a previous revision.
