
How to recycle pods in Kubernetes

I want my pods to be gracefully recycled from my deployments after a certain period of time, such as every week or month. I know I can add a cron job for that if I know the right Kubernetes command.

The question is: what is the best approach to do this in Kubernetes? Which command will let me achieve this goal?

Thank you very much for helping me out on this.

You should be managing your Pods via a higher-level controller like a Deployment or a StatefulSet. If you do, and you change any detail of the embedded pod template, the Deployment/StatefulSet/... will restart all of your pods for you. Probably the most minimal way to do this is to add an annotation to the pod template that says when it was last deployed:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        deployed-at: "20181222"

There is probably a kubectl patch one-liner to do this; if you're using a deployment manager like Helm, you can just pass in the current date as a "value" (configuration field) and have it injected for you.
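For reference, such a one-liner might look like the following; the deployment name myapp is a placeholder, and the annotation key matches the example above:

kubectl patch deployment myapp -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"deployed-at\":\"$(date +%Y%m%d)\"}}}}}"

Because this changes the pod template, the Deployment rolls all of its pods, exactly as if you had edited the YAML by hand.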

If you want to think bigger, though: the various base images routinely get security updates and minor bug fixes, and if you docker pull ubuntu:18.04 once a month or so you'll pick these up. If you know you want to restart your pods every month anyway, and you have a good CI/CD pipeline set up, consider setting up a scheduled job in Jenkins or whatever you use that rebuilds and redeploys everything, even if there are no changes in the underlying source tree. That will cause the image: to get updated, which will cause all of the pods to be destroyed and recreated, and you'll always be reasonably up to date on security updates.
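As a rough sketch of that idea, a monthly scheduled CI job could run something like the following; the registry, image name, and deployment here are hypothetical:

# Rebuild against the freshest base image and push a date-stamped tag
docker build --pull -t registry.example.com/myapp:$(date +%Y%m%d) .
docker push registry.example.com/myapp:$(date +%Y%m%d)

# Point the Deployment at the new tag, which triggers a rolling update
kubectl set image deployment/myapp myapp=registry.example.com/myapp:$(date +%Y%m%d)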

As the OP rayhan has found out, and as commented in kubernetes/kubernetes issue 13488, a kubectl patch of an environment variable is enough.

But... K8s 1.15 will bring kubectl rollout restart... that is, when PR 77423 is accepted and merged.

kubectl rollout restart now works for daemonsets and statefulsets.
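On 1.15 or newer, that looks like the following (the deployment name is a placeholder):

kubectl rollout restart deployment/my-deployment
kubectl rollout status deployment/my-deployment    # optionally wait for the rollout to finish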

If you need to restart Pods manually, you could run

kubectl get pods | grep somename | awk '{print $1}' | xargs -i sh -c 'kubectl delete pod -o name {} && sleep 4'

on a timer-based job (e.g. from your CI system), as suggested by KIVagant in https://github.com/kubernetes/kubernetes/issues/13488#issuecomment-372456851
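If you would rather keep that timer inside the cluster than in CI, one possible sketch is a Kubernetes CronJob running the same kind of loop. Everything here is an assumption rather than anything from the thread: the schedule, the image, and the pod-recycler service account, which would need RBAC permission to list and delete pods:

apiVersion: batch/v1beta1    # batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: weekly-pod-recycle
spec:
  schedule: "0 3 * * 0"      # every Sunday at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-recycler   # assumed to exist, with list/delete on pods
          restartPolicy: Never
          containers:
          - name: recycle
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl get pods -o name | grep somename | xargs -i sh -c 'kubectl delete {} && sleep 4'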

That GitHub thread reveals that there is currently no single best approach, and people are suggesting different things. I mention that one as it is closest to your suggestion and is a simple solution if you do have to do it. What is generally agreed is that you should try to avoid restart jobs and instead use probes to ensure unhealthy pods are automatically restarted.

Periodic upgrades (as opposed to restarts) are perfectly good to do, especially as rolling upgrades. But if you do this, be careful that all the upgrading doesn't mask problems. If you have Pods with memory leaks, or Pods that exhaust their connection pools when left running for long periods, then you want the unhealthy Pods to report themselves as unhealthy - both because they can then be automatically restarted and because it will help you monitor for code problems and address them.
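As an illustration of that last point, a liveness probe in the pod spec lets the kubelet restart an unhealthy container on its own; the endpoint, port, and thresholds below are placeholders:

containers:
- name: myapp
  image: myapp:1.0
  livenessProbe:
    httpGet:
      path: /healthz        # the app should fail this check when, e.g., its pool is exhausted
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 15
    failureThreshold: 3     # restarted after three consecutive failures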

You never recycle pods manually; that is a clear anti-pattern when using Kubernetes.

Options:

  • Use the declarative format with kubectl apply -f --prune (see the sketch after this list)

  • Use a CI/CD tool like GitLab or Spinnaker

  • Use Ksonnet

  • Use Knative

  • Write your own CI/CD tool that automates it
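For the first option, a minimal sketch might be (the manifests/ directory and the app label are assumptions):

# Apply everything under manifests/ and delete previously-applied
# objects matching the label that are no longer in the directory
kubectl apply -f manifests/ --prune -l app=myapp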

So far I have found that the following one-line command works fine for my purpose. I'm running it from Jenkins after a successful build.

kubectl patch deployment {deployment_name} -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
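This sets a date label on the pod template to the current Unix timestamp; since the pod template changed, the Deployment performs an ordinary rolling update, which is the same trick as the annotation patch described above.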
