Efficient Parallelization of small outer loops and big inner loop in R
I have the following R code:
LLL = list()
idx = 1
for(i in 1:3){
  for(j in 1:9){
    for(k in 1:13){
      for(iter in 1:1000000){
        if( runif(1,0,1)<0.5 ){
          LLL[[idx]] = rnorm(1,0,1)
          idx = idx + 1
        }
      }
    }
  }
}
Is there a way to parallelize this code efficiently?
What I was thinking is that I have 351 configurations of i,j,k. If I could distribute these configurations across cores, and each core ran a for loop for 1000000 iterations, could something like that be implemented?
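The idea described above can be sketched with base R's parallel package. This is a sketch under assumptions: mclapply forks processes and works on Unix-like systems only (on Windows, use makeCluster() with parLapply() instead), and the helper run_one is a hypothetical name for the per-configuration work, which mirrors the original loop (note that, as in the original code, the i,j,k values do not actually affect the simulation).

```r
library(parallel)

# The 351 configurations, one row per (i, j, k) combination
cfgs <- expand.grid(i = 1:3, j = 1:9, k = 1:13)

# One configuration's work: the original million-iteration loop
run_one <- function(row) {
  draws <- numeric(0)
  for (iter in 1:1000000) {
    if (runif(1) < 0.5) draws <- c(draws, rnorm(1))
  }
  draws
}

# Distribute the 351 configurations across the available cores
res <- mclapply(seq_len(nrow(cfgs)),
                function(r) run_one(cfgs[r, ]),
                mc.cores = max(1, detectCores() - 1))
```

Each core processes a subset of the 351 configurations independently, which is exactly the distribution scheme the question proposes.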
Instead of calling rnorm() one million times, it would be more efficient to call it once with the argument n = 1000000. The same goes for the for()-loops. We can instead first create an object that represents your 351 configurations and then iterate on that object. Create the configurations:
library(tidyr)

cfgs <-
  expand_grid(i = 1:3,
              j = 1:9,
              k = 1:13)
Code without parallelization:
cfgs |>
split(1:nrow(cfgs)) |>
lapply(\(x) rnorm(100000, 0, 1))
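If you want to preserve the original simulation's accept/reject step exactly, it can also be vectorized. A sketch, assuming the accepted draws are i.i.d. N(0,1): since each of the 1000000 trials is kept independently with probability 0.5, only the number of kept draws matters, and that count is Binomial(1000000, 0.5).

```r
# Draw how many of the 1e6 trials pass the runif(1) < 0.5 test,
# then generate that many standard normals in a single call
n_kept <- rbinom(1, size = 1000000, prob = 0.5)
draws  <- rnorm(n_kept)
```

This replaces one million runif() and up to one million rnorm() calls per configuration with two vectorized calls, while producing the same distribution of results.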
In order to parallelize the execution of the code we can use the furrr package.
library(furrr)
plan(multisession)
cfgs |>
split(1:nrow(cfgs)) |>
future_map(\(x) rnorm(100000, 0, 1), .options = furrr_options(seed=TRUE))
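A possible follow-up, as a sketch under assumptions (the worker count and the flattening into one long vector are choices, not part of the answer above): you can control how many workers the multisession plan uses and then combine the per-configuration results.

```r
library(furrr)

# Leave one core free for the rest of the system
plan(multisession, workers = availableCores() - 1)

out <- cfgs |>
  split(1:nrow(cfgs)) |>
  future_map(\(x) rnorm(100000, 0, 1),
             .options = furrr_options(seed = TRUE))

# Collapse the list of 351 numeric vectors into one long vector
all_draws <- unlist(out)
```

Setting seed = TRUE in furrr_options() gives statistically sound, reproducible parallel random number streams, which matters for simulations like this one.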