How to improve np.random.choice() looping efficiency
I am trying to apply np.random.choice to a big array with different weights, and I am wondering whether there is any way to avoid looping and improve performance. Here, len(weights) could be in the millions.
weights = [[0.1, 0.5, 0.4],
           [0.2, 0.4, 0.4],
           ...
           [0.3, 0.3, 0.4]]
choice = [1, 2, 3]
ret = np.zeros((len(weights), 20))
for i in range(len(weights)):
    ret[i] = np.random.choice(choice, 20, p=weights[i])
Here's a generalization of my answer in Fast random weighted selection across all rows of a stochastic matrix:
def vectorized_choice(p, n, items=None):
    s = p.cumsum(axis=1)
    r = np.random.rand(p.shape[0], n, 1)
    q = np.expand_dims(s, 1) >= r
    k = q.argmax(axis=-1)
    if items is not None:
        k = np.asarray(items)[k]
    return k
p is expected to be a two-dimensional array whose rows are probability vectors. n is the number of samples to draw from the distribution defined by each row. If items is None, the samples are integers in range(0, p.shape[1]). If items is not None, it is expected to be a sequence with length p.shape[1].
Example:
In [258]: p = np.array([[0.1, 0.5, 0.4], [0.75, 0, 0.25], [0, 0, 1], [1/3, 1/3, 1/3]])
In [259]: p
Out[259]:
array([[0.1 , 0.5 , 0.4 ],
[0.75 , 0. , 0.25 ],
[0. , 0. , 1. ],
[0.33333333, 0.33333333, 0.33333333]])
In [260]: vectorized_choice(p, 20)
Out[260]:
array([[1, 1, 2, 1, 1, 2, 2, 2, 1, 2, 1, 1, 1, 2, 2, 0, 1, 2, 2, 2],
[0, 2, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[1, 0, 2, 2, 0, 1, 2, 1, 0, 0, 0, 0, 2, 2, 0, 0, 2, 1, 1, 2]])
In [261]: vectorized_choice(p, 20, items=[1, 2, 3])
Out[261]:
array([[2, 1, 2, 2, 2, 3, 2, 2, 2, 2, 3, 3, 2, 2, 3, 3, 2, 3, 2, 2],
[1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1, 1, 3, 3, 1, 3, 1, 1, 1],
[3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
[3, 3, 3, 1, 3, 2, 1, 2, 3, 1, 2, 2, 3, 2, 1, 2, 1, 2, 2, 2]])
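As a quick sanity check on vectorized_choice, with a large n the per-row sample frequencies should approach the corresponding rows of p. A sketch (the function is reproduced from above so the snippet runs standalone):

```python
import numpy as np

# vectorized_choice as defined above, copied here so the check is self-contained.
def vectorized_choice(p, n, items=None):
    s = p.cumsum(axis=1)
    r = np.random.rand(p.shape[0], n, 1)
    q = np.expand_dims(s, 1) >= r
    k = q.argmax(axis=-1)
    if items is not None:
        k = np.asarray(items)[k]
    return k

p = np.array([[0.1, 0.5, 0.4], [0.75, 0.0, 0.25]])
samples = vectorized_choice(p, 100_000)
# Per-row empirical frequencies of each column index.
freq = np.stack([np.bincount(row, minlength=p.shape[1]) for row in samples]) / samples.shape[1]
print(freq)  # each row should be close to the corresponding row of p
```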
Timing for p with shape (1000000, 3):
In [317]: p = np.random.rand(1000000, 3)
In [318]: p /= p.sum(axis=1, keepdims=True)
In [319]: %timeit vectorized_choice(p, 20, items=np.arange(1, p.shape[1]+1))
1.89 s ± 28.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Here's the timing for Divakar's function:
In [320]: %timeit random_choice_prob_vectorized(p, 20, choice=np.arange(1, p.shape[1]+1))
7.33 s ± 43.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
The difference will be less pronounced if you increase the number of columns in p, and if you make the number of columns big enough, Divakar's function will be faster. E.g.
In [321]: p = np.random.rand(1000, 120)
In [322]: p /= p.sum(axis=1, keepdims=True)
In [323]: %timeit vectorized_choice(p, 20, items=np.arange(1, p.shape[1]+1))
6.41 ms ± 20.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [324]: %timeit random_choice_prob_vectorized(p, 20, choice=np.arange(1, p.shape[1]+1))
6.29 ms ± 342 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Borrowing the idea from Vectorizing numpy.random.choice for given 2D array of probabilities along an axis, along with the idea from vectorized searchsorted, here's one vectorized way -
def random_choice_prob_vectorized(weights, num_items, choice=None):
    weights = np.asarray(weights)
    w = weights.cumsum(1)
    r = np.random.rand(len(weights), num_items)
    m, n = w.shape
    o = np.arange(m)[:, None]
    w_o = (w + o).ravel()
    r_o = (r + o).ravel()
    idx = np.searchsorted(w_o, r_o).reshape(m, -1) % n
    if choice is not None:
        return np.asarray(choice)[idx]
    else:
        return idx
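The row-offset trick here (adding np.arange(m)[:,None] to both the cumulative weights and the random numbers before flattening) shifts each row into its own disjoint interval, so one flat np.searchsorted call behaves like an independent per-row search. A small check of that equivalence, assuming each row of weights sums to 1:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((5, 4))
weights /= weights.sum(axis=1, keepdims=True)

w = weights.cumsum(axis=1)   # each row ends at 1.0
r = rng.random((5, 3))

# Offset row i into the interval (i, i + 1], then do a single flat searchsorted.
m, n = w.shape
o = np.arange(m)[:, None]
idx_flat = np.searchsorted((w + o).ravel(), (r + o).ravel()).reshape(m, -1) % n

# The equivalent per-row loop for comparison.
idx_loop = np.stack([np.searchsorted(w[i], r[i]) for i in range(m)])
print(np.array_equal(idx_flat, idx_loop))
```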
Sample run to verify using 2D bincount -
In [28]: weights = [[0.1, 0.5, 0.4],
    ...:            [0.2, 0.4, 0.4],
    ...:            [0.3, 0.3, 0.4]]
    ...:
    ...: choice = [1, 2, 3]
    ...: num_items = 20000
In [29]: out = random_choice_prob_vectorized(weights, num_items, choice)
# Use 2D bincount to get per-row average occurrences and verify against weights
In [75]: bincount2D_vectorized(out)/num_items
Out[75]:
array([[0. , 0.09715, 0.4988 , 0.40405],
[0. , 0.1983 , 0.40235, 0.39935],
[0. , 0.30025, 0.29485, 0.4049 ]])
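bincount2D_vectorized is not defined in this post (it comes from another of Divakar's answers); a minimal equivalent that counts occurrences of each value per row can be sketched as:

```python
import numpy as np

def bincount2D_vectorized(a):
    # Count occurrences of each value in every row of a 2D integer array.
    # Row i of the result equals np.bincount(a[i], minlength=a.max() + 1).
    n = a.max() + 1
    # Offset each row by a multiple of n so a single flat bincount works.
    a_offs = a + n * np.arange(a.shape[0])[:, None]
    return np.bincount(a_offs.ravel(), minlength=a.shape[0] * n).reshape(-1, n)

out = np.array([[1, 2, 2, 3], [3, 3, 3, 1]])
print(bincount2D_vectorized(out))  # [[0 1 2 1] [0 1 0 3]]
```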
Looks like each row of the resulting array is independent of the other rows. I am not sure how bad the performance is now. If it really is a concern, I would try to use Python's multiprocessing module to run the random number generation with several processes in parallel. It should help.
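A sketch of that idea, splitting the rows of weights across a multiprocessing.Pool; the function names, chunking, and per-worker seeding are illustrative, not from the original post:

```python
import numpy as np
from multiprocessing import Pool

def _sample_rows(args):
    # Worker: run the per-row choice loop on one chunk of rows.
    weights_chunk, choice, num_items, seed = args
    rng = np.random.default_rng(seed)
    out = np.empty((len(weights_chunk), num_items), dtype=np.asarray(choice).dtype)
    for i, w in enumerate(weights_chunk):
        out[i] = rng.choice(choice, num_items, p=w)
    return out

def parallel_choice(weights, choice, num_items, n_procs=4):
    chunks = np.array_split(np.asarray(weights), n_procs)
    # Give each worker its own seed so the processes don't repeat streams.
    args = [(c, choice, num_items, s) for s, c in enumerate(chunks)]
    with Pool(n_procs) as pool:
        return np.vstack(pool.map(_sample_rows, args))

if __name__ == "__main__":
    weights = np.random.rand(1000, 3)
    weights /= weights.sum(axis=1, keepdims=True)
    ret = parallel_choice(weights, [1, 2, 3], 20)
    print(ret.shape)  # (1000, 20)
```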