
Provide CPU time and memory limits to a subprocess

I have been researching how to give a Python subprocess its own CPU time and memory limits.


import resource
import subprocess


def set_memory_time(seconds):
    limit_virtual_memory(seconds)
    usage_start = resource.getrusage(resource.RUSAGE_CHILDREN)
    print("usage_start ", usage_start)
    try:
        p = subprocess.check_output(
            ['docker exec -it cpp_compiler sh -c "g++ -o Test1 prog1.cpp && ./Test1 < input.txt"'],
            shell=True)
    except Exception as e:
        print(e)
    usage_end = resource.getrusage(resource.RUSAGE_CHILDREN)
    print("usage_end ", usage_end)
    cpu_time = usage_end.ru_utime - usage_start.ru_utime
    print("cpu_time ", cpu_time)


def limit_virtual_memory(seconds):
    max_virtual_memory = 10 * 1024 * 1024  # 10 MB
    # Note: these calls apply to the *current* (main) process,
    # and the limits are inherited by its children.
    resource.setrlimit(resource.RLIMIT_AS, (max_virtual_memory, resource.RLIM_INFINITY))
    resource.setrlimit(resource.RLIMIT_CPU, (seconds, seconds))

The problem is that resource.setrlimit sets the limit on the main process, and the subprocess inherits that limit. When the limit is exceeded, it kills the main process as well.

  1. The overall goal I am trying to achieve is that the line subprocess.check_output(['docker exec -it cpp_compiler sh -c "g++ -o Test1 prog1.cpp && ./Test1 < input.txt"'], shell=True) should not use more resources than allocated.
    Is there a way to achieve this in Python?

  2. The problem I am trying to solve is allocating a CPU time and memory limit for user-submitted C++ code. The code will eventually run in a Docker container for sandboxing purposes, but I want to limit the resources it uses.

It would be really helpful if someone could provide input on the problem, a potential solution, or corrections to the above code.

Thanks

You can't limit the resource utilization of an arbitrary docker exec process.

Docker uses a client/server model, so when you run docker exec it's just making a request to the Docker daemon. When you use setrlimit to limit the subprocess's memory, it only limits the docker exec client process itself; that process merely makes a request to the Docker daemon, which in turn launches a new process in the container's namespace. None of these processes are children of each other, and none of them beyond the original docker exec inherit these resource limits.
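To see the distinction, note that the usual way to limit a *direct* child is to apply the limits in a preexec_fn, which runs after fork() in the child but before exec(), so the parent is untouched. This works for a locally spawned program, but for docker exec it would still only constrain the Docker CLI client, never the process inside the container. A minimal, Unix-only sketch (the helper name and sizes are illustrative):

```python
import resource
import subprocess
import sys

def make_limiter(max_bytes, cpu_seconds):
    """Return a preexec_fn that applies rlimits in the forked child only."""
    def limiter():
        # Runs in the child after fork(), before exec(): these limits
        # affect only the child -- the parent keeps its own limits.
        resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    return limiter

# The child tries to allocate 1 GiB but its address space is capped
# at 512 MiB, so the allocation fails inside the child only.
proc = subprocess.run(
    [sys.executable, "-c", "x = bytearray(1024 * 1024 * 1024)"],
    preexec_fn=make_limiter(512 * 1024 * 1024, 5),
    capture_output=True,
)
print(proc.returncode)  # non-zero: the child hit its own limit
```

If the command here were ['docker', 'exec', ...], the limits would land on the docker client process, which uses almost no memory itself, and the containerized workload would run unconstrained.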

If instead you launch a new container, you can use Docker's resource limits on the new container. These don't limit the absolute amount of CPU time, but you probably want to limit the runtime of the launched process in any case.

You should generally avoid using the subprocess module to invoke docker commands. Constructing shell commands and consuming their output can be tricky, and if your code isn't perfect, it's very easy to use a shell-injection attack to use the docker command to root the host. Use something like the Docker SDK for Python instead.
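To make the injection risk concrete, here is a sketch with a hypothetical user-controlled filename: interpolated unquoted into a shell=True string, it lets the user run arbitrary commands, while shlex.quote (or avoiding the shell entirely) neutralizes it:

```python
import shlex
import subprocess

# Hypothetical user-supplied value, e.g. an uploaded file's name.
user_filename = "prog1.cpp; echo PWNED"

# Unsafe: the shell parses "; echo PWNED" as a second command,
# so the attacker-controlled code actually runs.
unsafe = subprocess.run(
    f"echo compiling {user_filename}",
    shell=True, capture_output=True, text=True,
)
print(unsafe.stdout.splitlines())  # two lines: the injected command ran

# Safer: quote the value so the shell treats it as one literal word.
safe = subprocess.run(
    f"echo compiling {shlex.quote(user_filename)}",
    shell=True, capture_output=True, text=True,
)
print(safe.stdout.splitlines())  # one line: no injection
```

With a docker command instead of echo, the same injection would hand the user a shell talking to the Docker daemon, which is effectively root on the host; the SDK approach below avoids constructing shell strings entirely.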

So, if you wanted to launch a new container, with a fixed memory limit, and to limit its execution time, you could do that with something like:

import docker
import requests
client = docker.from_env()

container = client.containers.run(
  image='some/image:tag',
  command=['the', 'command', 'to', 'run'],
  detach=True,
  mem_limit=10485760 # 10 MiB
)
try:
  container.wait(timeout=30) # seconds
except requests.exceptions.ReadTimeout:
  # container ran over its time allocation
  container.kill()
  container.wait()
print(container.logs())
container.remove()
