According to the answers to another Stack Overflow question (how to kill (or avoid) zombie processes with subprocess module), one can avoid zombie processes by calling subprocess.Popen.wait().
However, when I run the following function perform_sth inside my script several thousand times, the memory usage of each individual process tends to increase:
For example, the first process needs only 7 MB, but process no. 1000 already needs 500 MB, until in the end more than 8 GB are used and I have to kill the whole Python script. Each process should use more or less the same amount of memory.
Do I have a flaw in my function? Do I need to additionally kill the processes?
My code is:
def perform_sth(arg1, arg2):
    import subprocess
    # build the command line for the external "sth" tool
    sth_cline = ["sth", "-asequence=%s" % arg1, "-bsequence=%s" % arg2]
    process = subprocess.Popen(
        sth_cline,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    process.wait()
    return
Do not use stdout=PIPE if you don't read from the pipe.
Your child process is not a zombie (a zombie is a dead process; it needs only a tiny amount of memory, just enough to store its exit status in the process table). Your child process is alive, which is why it is capable of consuming gigabytes of memory.
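To see the difference, here is a minimal sketch. It is Linux-only (it assumes a /proc filesystem and the standard true utility): a child that has exited but has not been reaped is listed in state Z and occupies nothing but a process-table slot until wait() collects it.

import subprocess
import time

p = subprocess.Popen(["true"])  # child exits almost immediately
time.sleep(1)                   # give it time to terminate
# On Linux, field 3 of /proc/<pid>/stat is the process state;
# 'Z' means zombie: dead, but its exit status not yet collected.
with open("/proc/%d/stat" % p.pid) as f:
    print(f.read().split()[2])  # prints 'Z'
p.wait()                        # reap the child; the zombie disappears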
The OS pipe buffer is probably full and the child process is blocked while trying to write to the pipe. Your parent should drain the buffer by reading from the pipe to allow the child to continue, but instead the parent waits forever for process.wait() to return (a deadlock).
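If you do need the output, drain the pipes instead of calling wait() directly. Here is a minimal sketch of the function from the question rewritten with communicate(), which reads both streams to the end and then reaps the child (sth and its arguments are the placeholders from the question):

import subprocess

def perform_sth(arg1, arg2):
    sth_cline = ["sth", "-asequence=%s" % arg1, "-bsequence=%s" % arg2]
    process = subprocess.Popen(
        sth_cline,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    # communicate() reads both pipes to EOF, so the child can never
    # block on a full pipe buffer, and then waits for it to exit.
    out, err = process.communicate()
    return out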
If you don't need the output, use stdout=subprocess.DEVNULL instead. Or see How to hide output of subprocess in Python 2.7:
#!/usr/bin/env python
from subprocess import check_call, DEVNULL, STDOUT

# Discard all output: stdin and stdout go to /dev/null,
# stderr is merged into the (discarded) stdout.
check_call(["sth", "arg 1", "arg2"], stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
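Note that subprocess.DEVNULL was only added in Python 3.3; on Python 2 (which the linked question targets) a sketch of the equivalent opens os.devnull manually:

import os
from subprocess import check_call, STDOUT

# Python 2 has no subprocess.DEVNULL; pass a file object
# opened on os.devnull instead.
with open(os.devnull, 'wb') as devnull:
    check_call(["sth", "arg 1", "arg2"],
               stdin=devnull, stdout=devnull, stderr=STDOUT)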