How would I use Dask to perform parallel operations on slices of NumPy arrays?
I have a NumPy array of coordinates of size n_slice x 2048 x 3, where n_slice is in the tens of thousands. I want to apply the following operation to each 2048 x 3 slice separately:
import numpy as np
from scipy.spatial.distance import pdist

# load coor from a binary xyz file, dcd format
n_slice, n_coor, _ = coor.shape
r = np.arange(n_coor)
dist = np.zeros([n_slice, n_coor, n_coor])

# this loop is what I want to parallelize; each slice is completely independent
for i in range(n_slice):
    dist[i, r[:, None] < r] = pdist(coor[i])
I tried using Dask by making coor a dask.array:
import dask.array as da
dcoor = da.from_array(coor, chunks=(1, 2048, 3))
but simply replacing coor with dcoor does not expose any parallelism. I can see how to set up parallel threads to run on each slice, but how do I leverage Dask to handle the parallelism?
Here is a parallel implementation using concurrent.futures:
import concurrent.futures
import multiprocessing

n_cpu = multiprocessing.cpu_count()

def get_dist(coor, dist, r):
    dist[r[:, None] < r] = pdist(coor)

# load coor from a binary xyz file, dcd format
n_slice, n_coor, _ = coor.shape
r = np.arange(n_coor)
dist = np.zeros([n_slice, n_coor, n_coor])

with concurrent.futures.ThreadPoolExecutor(max_workers=n_cpu) as executor:
    for i in range(n_slice):
        executor.submit(get_dist, coor[i], dist[i], r)
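The thread-based version above can be made self-contained for testing. Threads help here because pdist releases the GIL inside SciPy's compiled code, and because dist[i] is a view into the shared output array, each worker writes its results in place. A minimal sketch, with small random coordinates standing in for the dcd file (an assumption for the demo):

```python
import concurrent.futures
import multiprocessing

import numpy as np
from scipy.spatial.distance import pdist

def get_dist(coor_slice, dist_slice, r):
    # scatter the condensed distances into the upper triangle of this slice
    dist_slice[r[:, None] < r] = pdist(coor_slice)

# random coordinates stand in for the dcd file (demo assumption)
n_slice, n_coor = 4, 16
coor = np.random.rand(n_slice, n_coor, 3)
r = np.arange(n_coor)
dist = np.zeros([n_slice, n_coor, n_coor])

with concurrent.futures.ThreadPoolExecutor(
        max_workers=multiprocessing.cpu_count()) as executor:
    for i in range(n_slice):
        # dist[i] is a view, so each task fills its own slice in place
        executor.submit(get_dist, coor[i], dist[i], r)
# leaving the with-block waits for all submitted tasks to finish
```
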
This problem may not be well suited to Dask, since there is no inter-block computation.
map_blocks
The map_blocks method may be helpful:
dcoor.map_blocks(pdist)
It looks like you're doing some fancy slicing to insert particular values into particular locations of an output array. This will probably be awkward to do with dask.arrays. Instead, I recommend making a function that produces a numpy array:
def myfunc(chunk):
    values = pdist(chunk[0, :, :])
    output = np.zeros((2048, 2048))
    r = np.arange(2048)
    output[r[:, None] < r] = values
    return output

dcoor.map_blocks(myfunc)
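Because myfunc returns a block whose shape differs from the input chunk, map_blocks needs to be told the output chunk shape via its chunks= argument. A runnable sketch under that assumption, keeping a leading length-1 axis on the output so the blocks stack into an (n_slice, n_coor, n_coor) array, and using small demo sizes in place of 2048:

```python
import numpy as np
import dask.array as da
from scipy.spatial.distance import pdist

n_slice, n_coor = 6, 32  # small sizes for the demo (assumption)
coor = np.random.rand(n_slice, n_coor, 3)

def myfunc(chunk):
    # chunk has shape (1, n_coor, 3); build a (1, n_coor, n_coor) output
    values = pdist(chunk[0, :, :])
    output = np.zeros((1, n_coor, n_coor))
    r = np.arange(n_coor)
    output[0, r[:, None] < r] = values
    return output

dcoor = da.from_array(coor, chunks=(1, n_coor, 3))
# the output block shape differs from the input, so declare it via chunks=
ddist = dcoor.map_blocks(myfunc, chunks=(1, n_coor, n_coor), dtype=float)
dist = ddist.compute()
```

compute() then evaluates one myfunc call per chunk, which Dask's scheduler can run in parallel.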
delayed
In the worst case you can always use dask.delayed:
from dask import delayed, compute
coor2 = delayed(coor)
slices = [coor2[i] for i in range(coor.shape[0])]
slices2 = [delayed(pdist)(slice) for slice in slices]
results = compute(*slices2)
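The delayed route can be fleshed out into a runnable sketch: one delayed pdist task per slice, then a serial pass that scatters each condensed result back into the square output array. This version indexes the NumPy array directly instead of wrapping it in delayed(coor), and uses small random data in place of the dcd file (both assumptions for the demo):

```python
import numpy as np
from dask import delayed, compute
from scipy.spatial.distance import pdist

n_slice, n_coor = 5, 20  # small demo sizes (assumption)
coor = np.random.rand(n_slice, n_coor, 3)

# one delayed pdist call per slice; compute() runs them in parallel
tasks = [delayed(pdist)(coor[i]) for i in range(n_slice)]
results = compute(*tasks)

# scatter each condensed result back into a square distance matrix
r = np.arange(n_coor)
dist = np.zeros([n_slice, n_coor, n_coor])
for i, values in enumerate(results):
    dist[i, r[:, None] < r] = values
```
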