shared memory between processes
I'm playing around with the multiprocessing module in Python, trying to parallelize an algorithm that loops through a list with a different increment value each time (a modification of the Sieve of Eratosthenes). I therefore want a single list shared between all of the processes, so that every process modifies the same list.

I've tried the `multiprocessing.Array` function, but when I reach the end of the program the array is still unmodified and still contains all 0's (the value I initialized it to).
```python
import multiprocessing
import math

num_cores = multiprocessing.cpu_count()
lower = 0
mark = None

def mark_array(k):
    global mark
    index = (-(-lower // k) * k) - lower
    for i in range(index, len(mark), k):
        mark[i] = 1

def sieve(upper_bound, lower_bound):
    size = upper_bound - lower_bound + 1
    global mark
    mark = multiprocessing.Array('i', size, lock=False)
    for i in range(size):
        mark[i] = 0
    klimit = int(math.sqrt(upper_bound)) + 1
    global lower
    lower = lower_bound
    if __name__ == '__main__':
        pool = multiprocessing.Pool(processes=num_cores)
        inputs = list(range(2, klimit + 1))
        pool.map(mark_array, inputs)
        pool.close()
        pool.join()
        result = []
        for i in range(size):
            result.append(mark[i])
        print(result)

sieve(200, 100)
```
Pardon the code. It's a bit messy, but I'm just trying to get the shared memory to work before I clean it up.
EDIT: Ok, so I tried the exact same code on a Linux machine and there I get the expected output. However, running the same code in VS Code on a Windows machine does not. Any idea why?
EDIT #2: This seems to be a Windows-specific issue, since Windows handles process creation differently than Linux. If that's the case, any idea how to solve it?
You could try to use `multiprocessing.Manager` for your task:
```python
import multiprocessing
import math
from functools import partial

num_cores = multiprocessing.cpu_count()

def mark_array(mark, lower, k):
    # Mark every multiple of k that falls inside [lower, lower + len(mark)).
    index = (-(-lower // k) * k) - lower
    for i in range(index, len(mark), k):
        mark[i] = 1

def sieve(upper_bound, lower_bound):
    size = upper_bound - lower_bound + 1
    klimit = int(math.sqrt(upper_bound)) + 1
    if __name__ == '__main__':
        pool = multiprocessing.Pool(processes=num_cores)
        with multiprocessing.Manager() as manager:
            mark = manager.list([0] * size)
            inputs = list(range(2, klimit + 1))
            # Bind the shared list and the lower bound as arguments, so
            # each worker receives them explicitly. This also works under
            # Windows' "spawn" start method, where globals set in the
            # parent after import are not inherited by the workers.
            foo = partial(mark_array, mark, lower_bound)
            pool.map(foo, inputs)
            pool.close()
            pool.join()
            result = [mark[i] for i in range(size)]
            print(result)

sieve(200, 100)
```
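If the `Manager` proxy turns out to be too slow for large ranges (every element access is an IPC round trip), an alternative worth sketching is to keep `multiprocessing.Array` but hand it to each worker through the pool's `initializer`: shared ctypes objects may be passed at process-creation time, which is what `initargs` does, so this also works under Windows' spawn start method. This is a sketch restructuring your code, not the original answer's approach; the `init_worker` helper is my own naming:

```python
import math
import multiprocessing

# Globals that each worker process fills in via the pool initializer.
mark = None
lower = 0

def init_worker(shared_mark, lower_bound):
    # Runs once in every worker. Shared ctypes arrays can be passed
    # here (at process creation), unlike through pool.map arguments.
    global mark, lower
    mark = shared_mark
    lower = lower_bound

def mark_array(k):
    # Mark every multiple of k inside [lower, lower + len(mark)).
    index = (-(-lower // k) * k) - lower
    for i in range(index, len(mark), k):
        mark[i] = 1

def sieve(upper_bound, lower_bound):
    size = upper_bound - lower_bound + 1
    klimit = int(math.sqrt(upper_bound)) + 1
    # 'i' arrays start zero-filled; lock=False is safe here because
    # every racing write stores the same value, 1.
    shared = multiprocessing.Array('i', size, lock=False)
    with multiprocessing.Pool(
        processes=multiprocessing.cpu_count(),
        initializer=init_worker,
        initargs=(shared, lower_bound),
    ) as pool:
        pool.map(mark_array, range(2, klimit + 1))
    return list(shared)

if __name__ == '__main__':
    print(sieve(200, 100))  # 0 = candidate prime, 1 = marked composite
```

Because the workers write directly into shared memory, reading the result back is a plain local copy rather than one proxy call per element.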