
Python Multiprocessing - Parallel processes

I'm new to Python multiprocessing, and I'm trying to implement some parallel calculations. I've been told that this:

# M is an integer containing the number of processes I'd like to launch
results = []
for i in range(0, M):
        p = Process(target=processchild, args=(data[i], q))
        p.start()
        results.append(q.get())  # blocks until this worker puts its result
        p.join()

is still sequential, because .join() causes the loop to wait until p is finished before starting the next one. I've read in an answer here that

You'll either want to join your processes individually outside of your for loop (e.g., by storing them in a list and then iterating over it)...

So if I modified my code to

results = []
processes = []
for i in range(0, M):
        p = Process(target=processchild, args=(data[i], q))
        p.start()
        processes.append(p)
        results.append(q.get())

for p in processes:
        p.join()

Would it actually run in parallel now? If not, how can I modify my code to work that way? I've read the solution using multiprocessing.Pool and apply_async posted as an answer to the question I previously linked, so I'm mostly interested in a solution that doesn't use these.

Yes, this will run in parallel.

All of the processes are started before you try joining any of them, so the loop will not block after the first process.
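As a minimal, self-contained sketch of this pattern (the worker `processchild` and the `data` chunks here are placeholder stand-ins for the asker's own function and inputs):

```python
from multiprocessing import Process, Queue

def processchild(chunk, q):
    # Stand-in for the real computation: sum one chunk of data.
    q.put(sum(chunk))

def run_parallel(data, M):
    q = Queue()
    processes = []
    for i in range(M):
        p = Process(target=processchild, args=(data[i], q))
        p.start()                  # start every worker before collecting
        processes.append(p)
    # Collect M results; they may arrive in any order.
    results = [q.get() for _ in range(M)]
    for p in processes:
        p.join()                   # now wait for every worker to exit
    return results

if __name__ == "__main__":
    data = [[1, 2], [3, 4], [5, 6]]
    print(sorted(run_parallel(data, M=3)))  # [3, 7, 11]
```

Note that this sketch also moves the `q.get()` calls out of the start loop: calling `q.get()` right after each `p.start()` blocks the loop on each worker in turn, whereas starting all workers first lets them compute concurrently. Draining the queue before joining also avoids a deadlock when a child blocks on a full queue buffer.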
