How to shut down a process with an event loop and executor
Consider the following program.
import asyncio
import multiprocessing
from multiprocessing import Queue
from concurrent.futures.thread import ThreadPoolExecutor
import sys


def main():
    executor = ThreadPoolExecutor()
    loop = asyncio.get_event_loop()
    # comment the following line and the shutdown will work smoothly
    asyncio.ensure_future(print_some(executor))
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        print("shutting down")
        executor.shutdown()
        loop.stop()
        loop.close()
        sys.exit()


async def print_some(executor):
    print("Waiting...Hit CTRL+C to abort")
    queue = Queue()
    loop = asyncio.get_event_loop()
    some = await loop.run_in_executor(executor, queue.get)
    print(some)


if __name__ == '__main__':
    main()
All I want is a graceful shutdown when I hit CTRL+C. However, the executor thread seems to prevent that (even though I do call shutdown).
You need to send a poison pill to make the workers stop listening on the queue.get call. Worker threads in the ThreadPoolExecutor pool will block Python from exiting if they have active work. There's a comment in the source code that describes the reasoning for this behavior:
# Workers are created as daemon threads. This is done to allow the interpreter
# to exit when there are still idle threads in a ThreadPoolExecutor's thread
# pool (i.e. shutdown() was not called). However, allowing workers to die with
# the interpreter has two undesirable properties:
# - The workers would still be running during interpreter shutdown,
# meaning that they would fail in unpredictable ways.
# - The workers could be killed while evaluating a work item, which could
# be bad if the callable being evaluated has external side-effects e.g.
# writing to a file.
#
# To work around this problem, an exit handler is installed which tells the
# workers to exit when their work queues are empty and then waits until the
# threads finish.
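The same deadlock can be reproduced without asyncio at all: a worker stuck in a blocking queue.get keeps shutdown() from returning until something unblocks it. A minimal sketch (standard library only; the poison-pill convention of using None is an assumption, any sentinel value works):

```python
import queue
from concurrent.futures import ThreadPoolExecutor


def worker(q):
    # Blocks here indefinitely until an item arrives.
    # By convention, a None item is the poison pill.
    return q.get()


q = queue.Queue()
executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(worker, q)

# Without this put, executor.shutdown() below would block forever,
# because the worker thread is stuck inside q.get().
q.put(None)          # poison pill: unblocks the worker
executor.shutdown()  # waits for workers; now returns promptly
print(future.result())
```

The worker treats the sentinel as "stop": once it returns, the pool's thread goes idle and shutdown() can join it.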
Here's a complete example that exits cleanly:
import asyncio
import multiprocessing
from multiprocessing import Queue
from concurrent.futures.thread import ThreadPoolExecutor
import sys


def main():
    executor = ThreadPoolExecutor()
    loop = asyncio.get_event_loop()
    # comment the following line and the shutdown will work smoothly
    fut = asyncio.ensure_future(print_some(executor))
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        print("shutting down")
        queue.put(None)  # Poison pill
        loop.run_until_complete(fut)
        executor.shutdown()
        loop.stop()
        loop.close()


async def print_some(executor):
    print("Waiting...Hit CTRL+C to abort")
    loop = asyncio.get_event_loop()
    some = await loop.run_in_executor(executor, queue.get)
    print(some)


queue = None

if __name__ == '__main__':
    queue = Queue()
    main()
The run_until_complete(fut) call is needed to avoid a warning about a pending task hanging around when the asyncio event loop exits. If you don't care about that, you can leave that call out.
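The warning in question ("Task was destroyed but it is pending!") is emitted when a loop is closed while a task is still unfinished. A minimal sketch of the drain-before-close pattern in isolation (standard library only; the pending() coroutine and its 0.05-second sleep are made up for illustration):

```python
import asyncio


async def pending():
    await asyncio.sleep(0.05)
    return "done"


loop = asyncio.new_event_loop()
task = loop.create_task(pending())

# Draining the task with run_until_complete() before close()
# avoids the "Task was destroyed but it is pending!" warning.
result = loop.run_until_complete(task)
loop.close()
print(result)  # prints "done"
```

In the answer's code the same thing happens: the poison pill lets print_some finish, and run_until_complete(fut) lets the loop retrieve that result before close().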