Multiprocessing: why is a numpy array shared with the child processes, while a list is copied?
Is shared readonly data copied to different processes for multiprocessing?
My code looks something like this:
from multiprocessing import Pool

glbl_array = # a 3 Gb array

def my_func(args, def_param=glbl_array):
    # do stuff on args and def_param

if __name__ == '__main__':
    pool = Pool(processes=4)
    pool.map(my_func, range(1000))
Is there a way to make sure (or encourage) that the different processes do not get a copy of glbl_array but share it? If there is no way to stop the copy, I will go with a memmapped array, but my access patterns are not very regular, so I expect memmapped arrays to be slower. The above seemed like the first thing to try. This is on Linux. I just wanted some advice from Stackoverflow and do not want to annoy the sysadmin. Do you think it will help if the second parameter is a genuinely immutable object like glbl_array.tostring()?
You can use the shared memory stuff from multiprocessing together with Numpy fairly easily:
import multiprocessing
import ctypes
import numpy as np

shared_array_base = multiprocessing.Array(ctypes.c_double, 10*10)
shared_array = np.ctypeslib.as_array(shared_array_base.get_obj())
shared_array = shared_array.reshape(10, 10)

#-- edited 2015-05-01: the assert check below checks the wrong thing
#   with recent versions of Numpy/multiprocessing. That no copy is made
#   is indicated by the fact that the program prints the output shown below.
## No copy was made
##assert shared_array.base.base is shared_array_base.get_obj()

# Parallel processing
def my_func(i, def_param=shared_array):
    shared_array[i, :] = i

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4)
    pool.map(my_func, range(10))
    print(shared_array)
which prints
[[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.]
[ 3. 3. 3. 3. 3. 3. 3. 3. 3. 3.]
[ 4. 4. 4. 4. 4. 4. 4. 4. 4. 4.]
[ 5. 5. 5. 5. 5. 5. 5. 5. 5. 5.]
[ 6. 6. 6. 6. 6. 6. 6. 6. 6. 6.]
[ 7. 7. 7. 7. 7. 7. 7. 7. 7. 7.]
[ 8. 8. 8. 8. 8. 8. 8. 8. 8. 8.]
[ 9. 9. 9. 9. 9. 9. 9. 9. 9. 9.]]
However, Linux has copy-on-write semantics on fork(), so even without using multiprocessing.Array, the data will not be copied unless it is written to.
The following code works on Win7 and Mac (probably on Linux, but not tested):
import multiprocessing
import ctypes
import numpy as np

#-- edited 2015-05-01: the assert check below checks the wrong thing
#   with recent versions of Numpy/multiprocessing. That no copy is made
#   is indicated by the fact that the program prints the output shown below.
## No copy was made
##assert shared_array.base.base is shared_array_base.get_obj()

shared_array = None

def init(shared_array_base):
    global shared_array
    shared_array = np.ctypeslib.as_array(shared_array_base.get_obj())
    shared_array = shared_array.reshape(10, 10)

# Parallel processing
def my_func(i):
    shared_array[i, :] = i

if __name__ == '__main__':
    shared_array_base = multiprocessing.Array(ctypes.c_double, 10*10)
    pool = multiprocessing.Pool(processes=4, initializer=init, initargs=(shared_array_base,))
    pool.map(my_func, range(10))
    shared_array = np.ctypeslib.as_array(shared_array_base.get_obj())
    shared_array = shared_array.reshape(10, 10)
    print(shared_array)
For those stuck using Windows, which does not support fork() (unless using CygWin), pv's answer does not work: globals are not made available to child processes. Instead, you must pass the shared memory during the initialization of the Pool/Process, like this:
#! /usr/bin/python
import time
from multiprocessing import Process, Queue, Array

def f(q, a):
    m = q.get()
    print(m)
    print(a[0], a[1], a[2])
    m = q.get()
    print(m)
    print(a[0], a[1], a[2])

if __name__ == '__main__':
    a = Array('B', (1, 2, 3), lock=False)
    q = Queue()
    p = Process(target=f, args=(q, a))
    p.start()
    q.put([1, 2, 3])
    time.sleep(1)
    a[0:3] = (4, 5, 6)
    q.put([4, 5, 6])
    p.join()
(It isn't numpy, and it isn't good code, but it illustrates the point ;-)
If you are looking for an option that works efficiently on Windows, and works well for irregular access patterns, branching, and other scenarios where you may need to analyze different matrices based on a combination of a shared-memory matrix and process-local data, the mathDict toolkit in the ParallelRegression package was designed to handle this exact situation.
I know I am answering a very old question, but this topic does not apply on the Windows OS. The above answers are misleading without providing substantial proof, so I tried the following code.
# -*- coding: utf-8 -*-
from __future__ import annotations

import ctypes
import multiprocessing
import os
import time
from concurrent.futures import ProcessPoolExecutor

import numpy as np
import numpy.typing as npt

shared_np_array_for_subprocess: npt.NDArray[np.double]

def init_processing(shared_raw_array_obj: ctypes.Array[ctypes.c_double]):
    global shared_np_array_for_subprocess
    #shared_np_array_for_subprocess = np.frombuffer(shared_raw_array_obj, dtype=np.double)
    shared_np_array_for_subprocess = np.ctypeslib.as_array(shared_raw_array_obj)

def do_processing(i: int) -> int:
    print("\n--------------->>>>>>")
    print(f"[P{i}] input is {i} in process id {os.getpid()}")
    print(f"[P{i}] 0th element via np access: ", shared_np_array_for_subprocess[0])
    print(f"[P{i}] 1st element via np access: ", shared_np_array_for_subprocess[1])
    print(f"[P{i}] NP array's base memory is: ", shared_np_array_for_subprocess.base)
    np_array_addr, _ = shared_np_array_for_subprocess.__array_interface__["data"]
    print(f"[P{i}] NP array obj pointing memory address is: ", hex(np_array_addr))
    print("\n--------------->>>>>>")
    time.sleep(3.0)
    return i

if __name__ == "__main__":
    shared_raw_array_obj: ctypes.Array[ctypes.c_double] = multiprocessing.RawArray(ctypes.c_double, 128)  # 128 * 8 B = 1 KiB
    # This array is malloc'ed and zero-filled.
    print("Shared Allocated Raw array: ", shared_raw_array_obj)
    shared_raw_array_ptr = ctypes.addressof(shared_raw_array_obj)
    print("Shared Raw Array memory address: ", hex(shared_raw_array_ptr))

    # Assign data
    print("Assign 0, 1 element data in Shared Raw array.")
    shared_raw_array_obj[0] = 10.2346
    shared_raw_array_obj[1] = 11.9876
    print("0th element via ptr access: ", ctypes.c_double.from_address(shared_raw_array_ptr).value)
    print("1st element via ptr access: ", ctypes.c_double.from_address(shared_raw_array_ptr + ctypes.sizeof(ctypes.c_double)).value)

    print("Create NP array from the Shared Raw array memory")
    shared_np_array: npt.NDArray[np.double] = np.frombuffer(shared_raw_array_obj, dtype=np.double)
    print("0th element via np access: ", shared_np_array[0])
    print("1st element via np access: ", shared_np_array[1])
    print("NP array's base memory is: ", shared_np_array.base)
    np_array_addr, _ = shared_np_array.__array_interface__["data"]
    print("NP array obj pointing memory address is: ", hex(np_array_addr))
    print("NP array , Raw array points to same memory , No copies? : ", np_array_addr == shared_raw_array_ptr)

    print("Now that we have native memory based NP array , Send for multi processing.")
    with ProcessPoolExecutor(max_workers=4, initializer=init_processing, initargs=(shared_raw_array_obj,)) as process_executor:
        results = process_executor.map(do_processing, range(0, 2))
        print("All jobs sumitted.")
        for result in results:
            print(result)

    print("Main process is going to shutdown.")
    exit(0)
Here is the sample output:
Shared Allocated Raw array: <multiprocessing.sharedctypes.c_double_Array_128 object at 0x000001B8042A9E40>
Shared Raw Array memory address: 0x1b804300000
Assign 0, 1 element data in Shared Raw array.
0th element via ptr access: 10.2346
1st element via ptr access: 11.9876
Create NP array from the Shared Raw array memory
0th element via np access: 10.2346
1st element via np access: 11.9876
NP array's base memory is: <multiprocessing.sharedctypes.c_double_Array_128 object at 0x000001B8042A9E40>
NP array obj pointing memory address is: 0x1b804300000
NP array , Raw array points to same memory , No copies? : True
Now that we have native memory based NP array , Send for multi processing.
--------------->>>>>>
[P0] input is 0 in process id 21852
[P0] 0th element via np access: 10.2346
[P0] 1st element via np access: 11.9876
[P0] NP array's base memory is: <memory at 0x0000021C7ACAFF40>
[P0] NP array obj pointing memory address is: 0x21c7ad60000
--------------->>>>>>
--------------->>>>>>
[P1] input is 1 in process id 11232
[P1] 0th element via np access: 10.2346
[P1] 1st element via np access: 11.9876
[P1] NP array's base memory is: <memory at 0x0000022C7FF3FF40>
[P1] NP array obj pointing memory address is: 0x22c7fff0000
--------------->>>>>>
All jobs sumitted.
0
1
Main process is going to shutdown.
The output above is from the following environment:
OS: Windows 10 20H2
Python: Python 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)]
You can clearly see that the memory address the numpy array points to is different in each child process, which means memcopies are made. So in the Windows OS, the child processes do not share the underlying memory. I do think this is due to OS protection: processes cannot refer to an arbitrary pointer address in memory, as that would lead to memory access violations.