Can I use a multiprocessing Queue in a function called by Pool.imap?
I'm using Python 2.7 and trying to run some CPU-heavy tasks in their own processes. I would like to be able to send messages back to the parent process to keep it informed of the current status of the process. The multiprocessing Queue seems perfect for this, but I can't figure out how to get it to work.
So, this is my basic working example minus the use of a Queue.
import multiprocessing as mp
import time

def f(x):
    return x*x

def main():
    pool = mp.Pool()
    results = pool.imap_unordered(f, range(1, 6))
    time.sleep(1)
    print str(results.next())
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()
I've tried passing the Queue in several ways, and I get the error message "RuntimeError: Queue objects should only be shared between processes through inheritance". Here is one of the ways I tried, based on an earlier answer I found. (I get the same problem trying to use Pool.map_async and Pool.imap.)
import multiprocessing as mp
import time

def f(args):
    x = args[0]
    q = args[1]
    q.put(str(x))
    time.sleep(0.1)
    return x*x

def main():
    q = mp.Queue()
    pool = mp.Pool()
    results = pool.imap_unordered(f, ([i, q] for i in range(1, 6)))
    print str(q.get())
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()
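(As a side note not drawn from the question itself: one documented way around that RuntimeError is a `multiprocessing.Manager().Queue()`. The manager hands back a picklable proxy, so it can travel through `imap_unordered` arguments where a plain `mp.Queue` cannot. A minimal sketch, in Python 3 syntax:)

```python
import multiprocessing as mp

def f(args):
    x, q = args
    q.put(str(x))  # the proxy can be used like a normal Queue
    return x * x

def run():
    with mp.Manager() as manager:
        q = manager.Queue()  # picklable proxy, safe to pass as a task argument
        pool = mp.Pool()
        squares = sorted(pool.imap_unordered(f, ((i, q) for i in range(1, 6))))
        messages = sorted(q.get() for _ in range(5))  # blocks until all arrive
        pool.close()
        pool.join()
    return squares, messages

if __name__ == '__main__':
    print(run())
```

The trade-off is that every put/get goes through the manager's server process, so it is slower than a raw `mp.Queue`.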
Finally, the zero-fitness approach (making it global) doesn't generate any messages; it just locks up.
import multiprocessing as mp
import time

q = mp.Queue()

def f(x):
    q.put(str(x))
    return x*x

def main():
    pool = mp.Pool()
    results = pool.imap_unordered(f, range(1, 6))
    time.sleep(1)
    print q.get()
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()
I'm aware that it will probably work with multiprocessing.Process directly and that there are other libraries to accomplish this, but I hate to back away from the standard library functions that are a great fit until I'm sure it's not just my lack of knowledge keeping me from being able to exploit them.
Thanks.
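(For contrast, the `multiprocessing.Process` pattern the question alludes to looks roughly like this, a sketch not taken from the original post. Passing the Queue at Process creation time is the sanctioned "inheritance" route the RuntimeError message refers to:)

```python
import multiprocessing as mp

def f(x, q):
    # The Queue is handed over when the Process is created,
    # which is the "inheritance" the RuntimeError insists on.
    q.put((x, x * x))

def run():
    q = mp.Queue()
    procs = [mp.Process(target=f, args=(i, q)) for i in range(1, 6)]
    for p in procs:
        p.start()
    results = sorted(q.get() for _ in procs)  # blocks until all five report
    for p in procs:
        p.join()
    return results

if __name__ == '__main__':
    print(run())
```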
The trick is to pass the Queue as an argument to the initializer. It appears to work with all the Pool dispatch methods.
import multiprocessing as mp

def f(x):
    f.q.put('Doing: ' + str(x))
    return x*x

def f_init(q):
    f.q = q

def main():
    jobs = range(1, 6)
    q = mp.Queue()
    p = mp.Pool(None, f_init, [q])
    results = p.imap(f, jobs)
    p.close()
    for i in range(len(jobs)):
        print q.get()
        print results.next()

if __name__ == '__main__':
    main()
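(For readers on Python 3, the same initializer trick reads a little more clearly with the keyword arguments `initializer` and `initargs`; the positional `mp.Pool(None, f_init, [q])` above means exactly this. A minimal adaptation, not part of the original answer:)

```python
import multiprocessing as mp

def f(x):
    f.q.put('Doing: ' + str(x))  # queue attached to f by the initializer
    return x * x

def f_init(q):
    # Runs once in every worker process; stashes the queue on the function.
    f.q = q

def run():
    jobs = list(range(1, 6))
    q = mp.Queue()
    pool = mp.Pool(initializer=f_init, initargs=(q,))
    squares = list(pool.imap(f, jobs))        # imap preserves input order
    messages = sorted(q.get() for _ in jobs)  # blocks until all five arrive
    pool.close()
    pool.join()
    return squares, messages

if __name__ == '__main__':
    print(run())
```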
With the fork start method (i.e., on Unix platforms), you do NOT need the initializer trick from the top answer. Just define the mp.Queue as a global variable and it will be correctly inherited by the child processes. The OP's example works fine using Python 3.9.7 on Linux (code slightly adjusted):
import multiprocessing as mp
import time

q = mp.Queue()

def f(x):
    q.put(str(x))
    return x * x

def main():
    pool = mp.Pool(5)
    pool.imap_unordered(f, range(1, 6))
    time.sleep(1)
    for _ in range(1, 6):
        print(q.get())
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()
Output:
2
1
3
4
5
It's been 12 years, but I'd like to make sure any Linux user who comes across this question knows that the top answer's trick is only needed if you cannot use the fork start method.
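(To make that dependence on fork explicit, and to fail fast on platforms where it is unavailable, the same idea can be pinned to a fork context. This sketch is an editorial addition, not part of the answer above:)

```python
import multiprocessing as mp

ctx = mp.get_context('fork')  # raises ValueError where fork is unavailable
q = ctx.Queue()               # module-level, so forked workers inherit it

def f(x):
    q.put(str(x))
    return x * x

def run():
    pool = ctx.Pool(5)
    squares = sorted(pool.imap_unordered(f, range(1, 6)))
    messages = sorted(q.get() for _ in range(5))  # blocks until all arrive
    pool.close()
    pool.join()
    return squares, messages

if __name__ == '__main__':
    print(run())
```

On macOS (which defaults to spawn since Python 3.8) and on Windows, the global-queue approach fails, so there the initializer trick from the top answer is still the way to go.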