
How to let a Python process use all Docker container memory without getting killed?

I have a Python process that does some heavy computation with Pandas and the like. It's not my code, so I don't have much insight into it.

This code used to run perfectly fine on a server with 8GB of RAM, maxing out all the available resources.

We moved this code to Kubernetes and we can't make it run: even with the allocated resources increased to 40GB, the process is greedy and will inevitably try to grab as much memory as it can until it exceeds the container limit and gets killed by Kubernetes.

I know this code is probably suboptimal and needs reworking in its own right.

However, my question is how to make Docker on Kubernetes mimic what Linux did on the server: give the process as many resources as it needs without killing it.

I found out that running something like this seems to work:

import os
import resource

# cgroup v1 exposes the container's memory limit at this path
if os.path.isfile('/sys/fs/cgroup/memory/memory.limit_in_bytes'):
    with open('/sys/fs/cgroup/memory/memory.limit_in_bytes') as limit:
        mem = int(limit.read())
        # Cap the process address space (soft and hard) at the container
        # limit, so allocations fail with MemoryError instead of an OOM kill
        resource.setrlimit(resource.RLIMIT_AS, (mem, mem))

This reads the memory limit file from cgroups and sets it as both the soft and hard limit on the process's maximum address space.
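Note that the path above is the cgroup v1 layout. On hosts using cgroup v2, the limit is exposed at /sys/fs/cgroup/memory.max instead, and it reads as the string max when unlimited. A minimal sketch covering both layouts (the helper name is just for illustration):

import os
import resource

def apply_cgroup_memory_limit():
    # Check the cgroup v1 path first, then the cgroup v2 path
    for path in ('/sys/fs/cgroup/memory/memory.limit_in_bytes',
                 '/sys/fs/cgroup/memory.max'):
        if os.path.isfile(path):
            with open(path) as limit:
                raw = limit.read().strip()
            if raw != 'max':  # cgroup v2 reports 'max' when unlimited
                mem = int(raw)
                resource.setrlimit(resource.RLIMIT_AS, (mem, mem))
            return

apply_cgroup_memory_limit()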

You can test it by running something like:

docker run -it --rm -m 1G --cpus 1 python:rc-alpine

Then try to allocate 1GB of RAM before and after running the script above.

With the script, you'll get a MemoryError; without it, the container will be killed.
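For the allocation step, something as simple as this in the container's Python shell will do (1024 ** 3 bytes is 1 GiB, matching the -m 1G limit above):

# ~1 GiB in one go; raises MemoryError once RLIMIT_AS is set,
# but triggers the cgroup OOM killer without it
data = bytearray(1024 ** 3)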

Using the --oom-kill-disable option together with a memory limit works for me (12GB memory) in a Docker container. Perhaps it applies to Kubernetes as well.

docker run -dp 80:8501 --oom-kill-disable -m 12g <image_name> 

Hence: how to mimic "--oom-kill-disable=true" in Kubernetes?

