
End a Process early in Python 3.6+

I've read that it's considered bad practice to kill a thread (see Is there any way to kill a Thread?). There are a LOT of answers there, and I'm wondering whether using a thread in the first place is even the right answer for me.

I have a bunch of multiprocessing.Process instances. Essentially, each Process is doing this:

while some_condition:
    result = self.function_to_execute(i, **kwargs_i)
    # outQ is a multiprocessing.Queue shared between all Processes
    self.outQ.put(Result(i, result))

The problem is that I need a way to interrupt function_to_execute, but I can't modify the function itself. Initially I was thinking of simply calling process.terminate(), but that appears to be unsafe with a multiprocessing.Queue.
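
For context, this is roughly the approach being ruled out (a minimal sketch; worker and out_q are illustrative names, not from the original code). The multiprocessing docs warn that terminating a process while it is using a shared queue can leave that queue corrupted for the other processes.

from multiprocessing import Process, Queue
import time

def worker(out_q):
    # Stand-in for function_to_execute: keeps producing results forever.
    i = 0
    while True:
        time.sleep(0.2)
        out_q.put(i)
        i += 1

if __name__ == '__main__':
    out_q = Queue()
    p = Process(target=worker, args=(out_q,))
    p.start()
    time.sleep(1)
    # Abruptly killing the worker: if it dies while holding the queue's
    # internal lock, the queue can become unusable for other processes.
    p.terminate()
    p.join()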

Most likely (but not guaranteed), if I need to kill a thread, the 'main' program is going to be done soon. Is my safest option to do something like this? Or perhaps there is a more elegant solution than using a thread in the first place?

def thread_task():
    while some_condition:
        result = self.function_to_execute(i, **kwargs_i)
        if (this_thread_is_not_daemonized):
            self.outQ.put(Result(i, result))

t = Thread(target=thread_task)
t.start()

if end_early:
    t.daemon = True  # note: CPython only allows setting daemon status before start()

I believe the end result of this is that the Process that spawned the thread will continue to waste CPU cycles on a task whose output I no longer care about, but if the main program finishes, it will clean up all my memory nicely.

The main problem with daemonizing a thread is that the main program could potentially continue for 30+ minutes even when I no longer care about the output of that thread.

From the threading docs:

If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an Event.
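
In a single process, that signalling pattern looks roughly like the sketch below (illustrative names such as stop_evt and worker; not part of the original answer): the worker checks the Event on every loop iteration and exits cleanly once it is set.

import threading
import time

stop_evt = threading.Event()

def worker():
    # Loop until the main thread signals us to stop.
    while not stop_evt.is_set():
        time.sleep(0.2)
        print('working...')

t = threading.Thread(target=worker)
t.start()
time.sleep(1)
stop_evt.set()   # ask the worker to finish its current iteration and exit
t.join()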

Here is a contrived example of what I was thinking - no idea if it mimics what you are doing or can be adapted for your situation. Another caveat: I've never written any real concurrent code.

Create an Event object in the main process and pass it all the way down to the thread. Design the thread so that it loops until the Event object is set. Once you don't need the processing anymore, set the Event object in the main process. There is no need to modify the function being run in the thread.

from multiprocessing import Process, Queue, Event
from threading import Thread
import time, random, os

def f_to_run():
    time.sleep(.2)
    return random.randint(1, 10)

class T(Thread):
    def __init__(self, evt, q, func, parent):
        self.evt = evt
        self.q = q
        self.func = func
        self.parent = parent
        super().__init__()
    def run(self):
        # keep producing results until the Event is set from the main process
        while not self.evt.is_set():
            n = self.func()
            self.q.put(f'PID {self.parent}-{self.name}: {n}')

def f(T, evt, q, func):
    # target for each Process: run one worker thread and wait for it to finish
    pid = os.getpid()
    t = T(evt, q, func, pid)
    t.start()
    t.join()
    q.put(f'PID {pid}-{t.name} is alive - {t.is_alive()}')
    q.put(f'PID {pid}:DONE')
    return 'foo done'

if __name__ == '__main__':
    results = []
    q = Queue()
    evt = Event()
    # two processes each with one thread
    p = Process(target=f, args=(T, evt, q, f_to_run))
    p1 = Process(target=f, args=(T, evt, q, f_to_run))
    p.start()
    p1.start()

    while len(results) < 40:
        results.append(q.get())
        print('.', end='')
    print('')
    evt.set()
    p.join()
    p1.join()
    while not q.empty():
        results.append(q.get_nowait())
    for thing in results:
        print(thing)

I initially tried to use threading.Event, but the multiprocessing module complained that it couldn't be pickled. I was actually surprised that the multiprocessing.Queue and multiprocessing.Event worked AND could be accessed by the thread.
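
That pickling complaint can be reproduced directly (a minimal sketch, not from the original answer): a threading.Event is built on thread locks, which cannot be pickled, while a multiprocessing.Event is designed to be passed to child processes.

import pickle
import threading
from multiprocessing import Event

try:
    # threading.Event wraps _thread.lock objects, which are not picklable,
    # so it cannot be sent to a child process this way.
    pickle.dumps(threading.Event())
except TypeError as e:
    print('threading.Event:', e)

# multiprocessing.Event is backed by a semaphore that the multiprocessing
# machinery knows how to share with child processes.
evt = Event()
evt.set()
print('multiprocessing.Event is_set:', evt.is_set())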


Not sure why I started with a Thread subclass - I thought it would be easier to control/specify what happens in its run method. But it can also be done with a plain function.

from multiprocessing import Process, Queue, Event
from threading import Thread
import time, random

def f_to_run():
    time.sleep(.2)
    return random.randint(1, 10)

def t1(evt, q, func):
    # thread target: produce results until the Event is set
    while not evt.is_set():
        n = func()
        q.put(n)

def g(t1, evt, q, func):
    t = Thread(target=t1, args=(evt, q, func))
    t.start()
    t.join()
    q.put(f'{t.name} is alive - {t.is_alive()}')
    return 'foo'

if __name__ == '__main__':
    q = Queue()
    evt = Event()
    p = Process(target=g, args=(t1, evt, q, f_to_run))
    p.start()
    time.sleep(5)
    evt.set()
    p.join()
