
How to change a k8s pod's limits without killing the original pod?

Request: A pod's limits may be set too low at the beginning; to make full use of the node's resources, we need to raise the limits. However, when the node's resources are not enough, to keep the node working well, we need to lower the limits. It is better not to kill the pod, because that may affect the cluster.

Background: I am currently a beginner in k8s and Docker, and my mentor gave me this request. Can this request be fulfilled normally, or is there a better way to solve this kind of problem? Thanks for your help! What I tried: I tried editing the cgroups, but I can only do this from inside a container, so the container may need to run in privileged mode.
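
Roughly, the kind of cgroup edit I mean is sketched below. This is an illustrative cgroup v1 example: the exact path under /sys/fs/cgroup depends on the container runtime and cgroup driver, <pod-uid> and <container-id> are placeholders, and the kubelet may revert such out-of-band changes.

# On the node, inspect the container's current memory limit (path is illustrative)
cat /sys/fs/cgroup/memory/kubepods/burstable/pod<pod-uid>/<container-id>/memory.limit_in_bytes

# Raise the memory limit to 512 MiB by writing directly to the cgroup file
echo $((512*1024*1024)) > /sys/fs/cgroup/memory/kubepods/burstable/pod<pod-uid>/<container-id>/memory.limit_in_bytes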

I expect a reasonable plan for this request. Thanks...

I do not think this is possible; there is an old issue tracking this on the Kubernetes GitHub ( https://github.com/kubernetes/kubernetes/issues/9043 ) from 2015, and it is still open.

Also, you should not rely on a pod not being recreated while using Kubernetes. Applications should be stateless to the point where, if one dies in the middle of a process, it can handle the failure and start over from the beginning once it is restarted.

I understand the idea behind trying to optimize resource usage to its maximum, but you should also be concerned about having a reliable process.

I think you should check out Kubernetes' Vertical Pod Autoscaler, as it automatically adjusts a pod's resources depending on its usage. Maybe that could be an alternative: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler
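
For illustration, a minimal VPA manifest looks roughly like this (my-app is a placeholder Deployment name, and the apiVersion may differ depending on the VPA release you install):

kubectl apply -f - <<EOF
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"   # in Auto mode VPA evicts pods to apply new requests
EOF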

You have to find the ID of the container running inside the pod, and run the command below to increase its resources.

docker update --cpu-shares NewValue -m NewValue DockerContainerID
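
For example, assuming a Docker runtime, one way to look up that container ID from the pod (mypod is a placeholder name) is:

# Print the container ID of the pod's first container (prefixed with docker://)
kubectl get pod mypod -o jsonpath='{.status.containerStatuses[0].containerID}'

# On the node, strip the docker:// prefix and update the limits in place
docker update --cpu-shares 1024 -m 512m <container-id>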

The clue is that you want to change limits without killing the pod.

This is not the way Kubernetes works, as Markus W Mahlberg explained in his comment above. Kubernetes has no "hot-plug CPU/memory" or "live migration" facilities like the ones convenient hypervisors provide. Kubernetes treats pods as ephemeral instances and does not take care to keep them running. Whether you need to change an application's resource limits, change its configuration, install updates, or repair a misbehaving application, the "kill-and-recreate" approach is applied to pods.
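
For instance, the standard way to change limits (my-app is a placeholder Deployment name) triggers exactly this recreation:

# Kubernetes rolls out new pods with the new limits and terminates the old ones
kubectl set resources deployment my-app --limits=cpu=500m,memory=512Mi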

Unfortunately, the solutions suggested here will not work for you:

  • Increasing limits for the running container within the pod (the docker update command) will lead to breaching the pod's limits, and Kubernetes will kill the pod.
  • Vertical Pod Autoscaler is part of the Kubernetes project and relies on the "kill-and-recreate" approach as well.

If you really need to keep the containers running and manage their allocated resource limits "on the fly", perhaps Kubernetes is not a suitable solution in this particular case. Probably you should consider using plain Docker or a VM-based solution.
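
As a sketch of the plain-Docker alternative (image and container names are placeholders), limits can be changed on a running container without recreating it:

# Start a container with initial limits
docker run -d --name myapp --memory 256m --cpus 0.5 nginx

# Later, raise the limits in place; the container keeps running
docker update --memory 512m --memory-swap 1g --cpus 1 myapp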
