Python: killing the parent process leaves the child process running

When I kill a Python process, a child process started via os.system is not terminated along with it.

I have already looked at Killing child process when parent crashes in python and Python Process won't call atexit (atexit does not seem to work with signals).

Does that mean I need to handle this situation by myself? If so, what is the preferred way to do so?

> python main.py
> ps
4792 ttys002    0:00.03 python main.py
4793 ttys002    0:00.03 python loop.py 
> kill -15 4792
> ps 
4793 ttys002    0:00.03 python loop.py

Sample Code:

main.py

import os
os.system('python loop.py')

loop.py

import time

while True:
    time.sleep(1000)

UPDATE1

I did some experiments and found a version that works, but I am still confused about the logic.

import os
import sys
import signal
import subprocess


def sigterm_handler(_signo, _stack_frame):
    # it raises SystemExit(0):
    print('go die')
    sys.exit(0)


signal.signal(signal.SIGTERM, sigterm_handler)

try:
    # os.system('python loop.py')
    # using os.system here won't work; it even seems to ignore SIGTERM entirely for some reason
    subprocess.call(['python', 'loop.py'])
except:
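    # the bare except also catches the SystemExit raised by sigterm_handler;
    # since subprocess.call leaves the child in the parent's process group,
    # os.killpg(0, ...) below kills the child along with this process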
    os.killpg(0, signal.SIGKILL)
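
For comparison, here is a minimal sketch (not from the question, reusing the loop.py name from the sample code) that keeps an explicit handle to the child and terminates only the child from the SIGTERM handler, instead of SIGKILLing the whole process group:

import signal
import subprocess
import sys

child = None  # Popen handle, set once the child has been started


def sigterm_handler(signo, frame):
    # forward the termination to the child before exiting ourselves
    if child is not None:
        child.terminate()
        child.wait()  # reap the child so no zombie is left behind
    sys.exit(0)


signal.signal(signal.SIGTERM, sigterm_handler)

child = subprocess.Popen([sys.executable, 'loop.py'])
child.wait()  # block until the child exits (or until we receive SIGTERM)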

kill -15 4792 sends SIGTERM to main.py in your example; it sends nothing to loop.py (or to its parent shell). SIGTERM is not propagated to other processes in the process tree by default.

os.system('python loop.py') starts at least two processes: the shell and the python process. You don't need the shell here; use subprocess.check_call() to run a single child process without the implicit shell. By the way, if your subprocess is a Python script, consider importing it and calling the corresponding functions instead.
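
For instance (a sketch, assuming loop.py lives in the current directory), the shell-free equivalent of the os.system call would be:

import subprocess
import sys

# run loop.py with the current interpreter, without an intermediate shell
subprocess.check_call([sys.executable, 'loop.py'])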

os.killpg(0, SIGKILL) sends the SIGKILL signal to the current process group. A shell creates a new process group (a job) for each pipeline, and in that case the os.killpg() call in the parent has no effect on the child (but see the update below). See How to terminate a python subprocess launched with shell=True.

#!/usr/bin/env python
import subprocess
import sys

try:
    p = subprocess.Popen([sys.executable, 'loop.py'])
except EnvironmentError as e:  # e.g. the interpreter could not be started
    sys.exit('failed to start %r, reason: %s' % (sys.executable, e))
else:
    try: # wait for the child process to finish
        p.wait()
    except KeyboardInterrupt: # on Ctrl+C (SIGINT)
        # NOTE: the shell sends SIGINT (on Ctrl+C) to the child itself if
        #   the child process is in the same foreground process group as its parent
        sys.exit("interrupted")
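
If the child does end up in its own process group (for example because it is started through a shell), the usual pattern from the linked question is to put it into a known process group at start-up and later kill that group explicitly; a rough sketch:

import os
import signal
import subprocess

# start the command in a brand-new session/process group (os.setsid), so
# the shell and anything it spawns can all be signalled with one call
p = subprocess.Popen('python loop.py', shell=True, preexec_fn=os.setsid)

# ... later, when the whole tree should be shut down:
os.killpg(os.getpgid(p.pid), signal.SIGTERM)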

Update

It seems os.system(cmd) doesn't create a new process group for cmd:

>>> import os
>>> os.getpgrp()
16180
>>> import sys
>>> cmd = sys.executable + ' -c "import os; print(os.getpgrp())"'
>>> os.system(cmd) #!!! same process group
16180
0
>>> import subprocess
>>> import shlex
>>> subprocess.check_call(shlex.split(cmd))
16180
0
>>> subprocess.check_call(cmd, shell=True)
16180
0
>>> subprocess.check_call(cmd, shell=True, preexec_fn=os.setpgrp) #!!! new
18644
0

and therefore the child started by os.system(cmd) in your example should also be killed by the os.killpg() call.

Though when I run the same command from bash, it does create a new process group for each pipeline:

$ python -c "import os; print(os.getpgrp())"
25225
$ python -c "import os; print(os.getpgrp())"
25248
