
How to recycle pods in Kubernetes

I want my pods to be gracefully recycled by my deployments after a certain period of time, such as every week or month. I know I can add a cron job for that if I know the Kubernetes command.

The question is what is the best approach to do this in Kubernetes. Which command will let me achieve this goal?

Thank you very much for helping me out on this.

You should be managing your Pods via a higher-level controller like a Deployment or a StatefulSet. If you do, and you change any detail of the embedded pod spec, the Deployment/StatefulSet/... will restart all of your pods for you. Probably the most minimal way to do this is to add an annotation to the pod template's metadata that says when it was last deployed:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        deployed-at: "20181222"

There is probably a kubectl patch one-liner to do this; if you're using a deployment manager like Helm, you can just pass in the current date as a "value" (configuration field) and have it injected for you.
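For example, something along these lines should work (the Deployment name my-deployment and the annotation key are placeholders, not from the original answer):

kubectl patch deployment my-deployment -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"deployed-at\":\"$(date +%Y%m%d)\"}}}}}"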

If you want to think bigger, though: the various base images routinely have security updates and minor bug fixes, and if you docker pull ubuntu:18.04 once a month or so you'll get these updates. If you already know you want to restart your pods every month anyway, and you have a good CI/CD pipeline set up, consider setting up a scheduled job in your Jenkins or whatever that rebuilds and redeploys everything, even if there are no changes in the underlying source tree. That will cause the image: field to get updated, which will cause all of the pods to be destroyed and recreated, and you'll always be reasonably up to date on security updates.
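As a rough sketch of what such a scheduled job might run (the registry, image name, deployment, and container name here are all placeholders, not part of the original answer):

# rebuild with --pull so updated base image layers are picked up
TAG="registry.example.com/myapp:$(date +%Y%m%d)"
docker build --pull -t "$TAG" .
docker push "$TAG"
# point the Deployment's container at the new tag, triggering a rolling update
kubectl set image deployment/myapp myapp="$TAG"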

As the OP rayhan has found out, and as commented in kubernetes/kubernetes issue 13488, a kubectl patch of an environment variable is enough.
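For instance, a patch of roughly this shape (the deployment and container names are placeholders; the container name must match one in the pod spec) bumps an environment variable and triggers a rolling restart:

kubectl patch deployment my-deployment -p \
  "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"my-container\",\"env\":[{\"name\":\"RESTARTED_AT\",\"value\":\"$(date +%s)\"}]}]}}}}"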

But... K8s 1.15 will bring kubectl rollout restart ... that is when PR 77423 is accepted and merged.

kubectl rollout restart now works for daemonsets and statefulsets.
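For a Deployment named my-deployment (the names here are just placeholders), that looks like:

kubectl rollout restart deployment/my-deployment
kubectl rollout restart daemonset/my-daemonset
kubectl rollout restart statefulset/my-statefulset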

If you need to restart Pods manually you could run

kubectl get pods | grep somename | awk '{print $1}' | xargs -i sh -c 'kubectl delete pod -o name {} && sleep 4'

on a timer-based job (e.g. from your CI system), as suggested by KIVagant in https://github.com/kubernetes/kubernetes/issues/13488#issuecomment-372456851
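If you would rather keep that timer inside the cluster than in the CI system, a minimal sketch as a Kubernetes CronJob could look like the following (the schedule, label selector, image, and service account name are assumptions; the service account needs RBAC permission to list and delete pods, and batch/v1 CronJob requires a reasonably recent cluster):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: weekly-pod-restart
spec:
  schedule: "0 3 * * 0"              # every Sunday at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-restarter
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest
              command: ["kubectl", "delete", "pod", "-l", "app=somename"]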

That GitHub thread reveals there is currently no single best approach and people are suggesting different things. I mention that one because it is closest to your suggestion and is a simple solution if you do have to do it. What is generally agreed is that you should try to avoid restart jobs and instead use probes to ensure unhealthy pods are automatically restarted.

Periodic upgrades (as opposed to restarts) are perfectly good to do, especially as rolling upgrades. But if you do this, be careful that all the upgrading doesn't mask problems. If you have Pods with memory leaks, or Pods that exhaust connection pools when left running for long periods, then you want the unhealthy Pods to report themselves as unhealthy: both because they can then be automatically restarted and because it will help you monitor for code problems and address them.
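For example, a liveness probe along these lines (the endpoint, port, and timings are made-up values) lets the kubelet restart a container once it starts failing its own health check:

# inside spec.template.spec of the Deployment
containers:
  - name: my-app
    image: my-app:1.0
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
      failureThreshold: 3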

You should never recycle pods manually; that is a clear anti-pattern in Kubernetes.

Options:

  • Use the declarative format with kubectl apply -f --prune (see the sketch after this list)

  • Use a CI/CD tool like GitLab or Spinnaker

  • Use Ksonnet

  • Use Knative

  • Write your own CI/CD tool that automates it
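As a sketch of the first option (the manifest directory and label selector are assumptions): --prune deletes objects that carry the selector's label but no longer appear in the applied manifests, so a re-run of the same command keeps the cluster in sync with what is declared.

kubectl apply -f ./manifests/ --prune -l app=myapp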

So far I have found that the following one-line command works fine for my purpose. I'm running it from Jenkins after a successful build.

kubectl patch deployment {deployment_name} -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
