
Kubernetes Rollout Drops Old Pods When New Pods Are Not Fully Ready

I'm using the kubectl rollout command to update my deployment. But since my project is a NodeJS project, npm run start takes some time (a few seconds before the application is actually running). Kubernetes, however, drops the old pods immediately after npm run start is executed.

For example,

kubectl logs -f my-app

> my app start
> nest start

Kubernetes will drop the old pods at this point. However, it takes another 10 seconds until

Application is running on: http://[::1]:5274

which means my service is actually up.

I'd like to know whether there is a way to change this, e.g. to wait some more time before Kubernetes drops the old pods.

My Dockerfile:

FROM node:14 AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
COPY protos ./protos/
COPY tsconfig.build.json ./
COPY tsconfig.json ./
# Install app dependencies
RUN npm install
# Copy the application source before building it
COPY . .
# Use ENV instead of RUN export: variables exported in a RUN step do not persist to later layers
ENV NODE_OPTIONS=--max_old_space_size=16384
RUN npm run build
# FROM node:14
# COPY --from=builder /app/node_modules ./node_modules
# COPY --from=builder /app/package*.json ./
# COPY --from=builder /app/dist ./dist
# COPY --from=builder /app/protos ./protos
EXPOSE 5273
CMD ["npm", "run", "start"]

Spec from my Kubernetes YAML file:

spec:
  replicas: 4
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: image
        imagePullPolicy: Always
        resources:
          limits:
            memory: "8Gi"
            cpu: "10"
          requests:
            memory: "8Gi"
            cpu: "10"
        livenessProbe:
          httpGet:
            path: /api/Health
            port: 5274
          initialDelaySeconds: 180
          periodSeconds: 80
          timeoutSeconds: 20
          failureThreshold: 2
        ports:
        - containerPort: 5274
        - containerPort: 5900

Use a startup probe on your container: https://docs.openshift.com/container-platform/4.11/applications/application-health.html . Pods don't count as "ready" until all of their containers have passed their startup (and readiness) checks.
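
A minimal sketch of how that could look in the container spec above, reusing the existing /api/Health endpoint (assuming it only starts answering once the app is actually listening):

        startupProbe:
          httpGet:
            path: /api/Health
            port: 5274
          # allow up to 36 * 5s = 180s for npm run start to come up
          periodSeconds: 5
          failureThreshold: 36

The kubelet holds back the liveness (and readiness) probes until the startup probe has succeeded, so the 180-second initialDelaySeconds on your livenessProbe becomes unnecessary.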

And during a deployment the scheduler counts non-ready pods as "unavailable" for things like the deployment's maxUnavailable setting. Thus the scheduler won't keep shutting down working pods until the new pods are ready for traffic ( https://docs.openshift.com/container-platform/4.11/applications/deployments/deployment-strategies.html ).
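
For example, a conservative rolling update for the four-replica deployment above could look like this (the values are illustrative, not taken from your manifest):

spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # never take down more than one old pod at a time
      maxSurge: 1        # start at most one extra new pod during the rollout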

As an additional benefit, services won't route traffic to non-ready pods, so the new pods won't receive any traffic until their containers have passed their startup probes.
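
If the app can also become temporarily unhealthy after startup, a readiness probe against the same endpoint (again assuming /api/Health reflects readiness) will pull the pod out of the Service's endpoints until it recovers:

        readinessProbe:
          httpGet:
            path: /api/Health
            port: 5274
          periodSeconds: 10
          failureThreshold: 3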
