
Deadlock occurs when using multiprocessing.Pipe() in multiprocessing.Pool.apply_async function

I am trying to use multiprocessing.Pipe() as a communication tool between multiple processes. But when I pass the pipe connections into pool.apply_async() as parameters, a deadlock occurs. Why?

The code and output are:

# coding=utf-8
from multiprocess import pool  # third-party 'multiprocess' package; used only by the commented-out pool.Pool() alternative below
from multiprocessing import Pipe, Pool, set_start_method, get_context, Queue, Manager, Process
import time


def worker_process(name, _out_pipe, _in_pipe):

    # _out_pipe.close()

    for x in range(10):
        _in_pipe.send(name + ':' + str(x))
        print(name + ' send value :' + str(x))
        time.sleep(0.1)

    # _in_pipe.close()

if __name__ == '__main__':
    set_start_method('spawn')
    print(get_context())
    out_pipe, in_pipe = Pipe()  # create the two connection ends used below
    # with pool.Pool() as pool:
    with Pool() as pool:
        pool.apply_async(worker_process, ('son_p1', out_pipe, in_pipe))
        pool.apply_async(worker_process, ('son_p2', out_pipe, in_pipe))
        pool.apply_async(worker_process, ('son_p3', out_pipe, in_pipe))
        # pool.apply(worker_process, ('son_p1', out_pipe, in_pipe))
        # pool.apply(worker_process, ('son_p2', out_pipe, in_pipe))
        # pool.apply(worker_process, ('son_p3', out_pipe, in_pipe))
        pool.close()
        pool.join()

    while out_pipe.poll():
        print(out_pipe.recv())

    # in_pipe.close()
    # out_pipe.close()

Process ForkPoolWorker-2:
Process ForkPoolWorker-5:
Process ForkPoolWorker-1:
Process ForkPoolWorker-6:
Process ForkPoolWorker-8:
Process ForkPoolWorker-9:
Process ForkPoolWorker-7:
Process ForkPoolWorker-4:
Traceback (most recent call last):
  File "/Users/zhaolong/PycharmProjects/pipEnvGrpc/pipe_example.py", line 34, in <module>
    pool.join()
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/pool.py", line 662, in join
    self._worker_handler.join()
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/threading.py", line 1011, in join
    self._wait_for_tstate_lock()
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/threading.py", line 1027, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):

But when I replace multiprocessing.Pool() with multiprocess.pool.Pool(), or multiprocessing.Pool().apply_async() with multiprocessing.Pool().apply(), the program runs normally. Why?
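Part of the difference is documented behaviour: Pool.apply blocks until the call finishes and re-raises any exception the task hit, while Pool.apply_async returns an AsyncResult whose error only surfaces when .get() is called, so a failing task can look like a silent hang at pool.join(). Below is a minimal, self-contained sketch that keeps the AsyncResult handles so any hidden error is raised rather than dropped (the timeout value is arbitrary):

from multiprocessing import Pipe, Pool
import time


def worker_process(name, _out_pipe, _in_pipe):
    for x in range(10):
        _in_pipe.send(name + ':' + str(x))
        time.sleep(0.1)


if __name__ == '__main__':
    out_pipe, in_pipe = Pipe()
    with Pool() as pool:
        # keep the AsyncResult handles instead of discarding them
        results = [pool.apply_async(worker_process, (name, out_pipe, in_pipe))
                   for name in ('son_p1', 'son_p2', 'son_p3')]
        pool.close()
        for r in results:
            # get() re-raises whatever error the task produced (for example
            # while pickling the pipe arguments) instead of losing it silently
            r.get(timeout=10)
        pool.join()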

Try using spawn to create new processes instead of fork, as per this blog post.

from multiprocessing import set_start_method, get_context

if __name__ == '__main__':
    set_start_method("spawn")
    print(get_context())
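As a side note, the spawn start method can also be selected per pool through a context object, without changing the process-wide default; a minimal sketch using only documented multiprocessing API:

from multiprocessing import get_context

if __name__ == '__main__':
    # build the pool from a spawn context instead of calling set_start_method
    ctx = get_context('spawn')
    with ctx.Pool() as pool:
        pass  # submit work with apply_async as before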

Additionally, try terminating your pool in a finally block. This has appeared as an issue elsewhere.

try:
    with Pool() as pool:
        pool.apply_async(worker_process, ('son_p1', out_pipe, in_pipe))
        pool.apply_async(worker_process, ('son_p2', out_pipe, in_pipe))
        pool.apply_async(worker_process, ('son_p3', out_pipe, in_pipe))
        pool.close()
        pool.join()

    while out_pipe.poll():
        print(out_pipe.recv())
finally:
    pool.terminate()
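For comparison, the standard library's Pipe examples hand the connection ends to Process objects directly rather than going through a Pool (the question's code already imports Process). A minimal sketch of that pattern, using one pipe per child so that several writers never share the same end:

from multiprocessing import Pipe, Process
import time


def worker_process(name, conn):
    # each child writes into its own end of the pipe and then closes it
    for x in range(10):
        conn.send(name + ':' + str(x))
        time.sleep(0.1)
    conn.close()


if __name__ == '__main__':
    readers = []
    procs = []
    for name in ('son_p1', 'son_p2', 'son_p3'):
        recv_end, send_end = Pipe()
        p = Process(target=worker_process, args=(name, send_end))
        p.start()
        readers.append(recv_end)
        procs.append(p)

    for p in procs:
        p.join()

    # drain what each child sent
    for recv_end in readers:
        while recv_end.poll():
            print(recv_end.recv())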
