
How to use 8 cores while running LDA topic model in R

I am running a Latent Dirichlet Allocation topic model in R using the following code:

library(topicmodels)

# Fit one Gibbs-sampled model for each number of topics k.
for (k in 2:30) {
    ldaOut <- LDA(dtm, k, method = "Gibbs",
                  control = list(nstart = nstart, seed = seed, best = best,
                                 burnin = burnin, iter = iter, thin = thin))
    assign(paste("ldaOut", k, sep = "_"), ldaOut)
}

The dtm has 12 million elements, and each loop iteration takes up to two hours on average. Meanwhile, R uses only 1 of my 8 logical processors (I have an i7-2700K CPU @ 3.50GHz with 4 cores). How can I make R use all the available computational power when I run one LDA topic model, or when using a loop as in this code?

Thank you

EDIT: following gc_'s advice, I used the following code:

library(doParallel)

n.cores <- detectCores(all.tests = T, logical = T)
cl <- makePSOCKcluster(n.cores)
doParallel::registerDoParallel(cl)

burnin <- 4000 
iter <- 2000
thin <- 500 
seed <-list(2003,10,100,10005,765)
nstart <- 5 
best <- TRUE 

var.shared <- c("ldaOut", "dtm", "nstart", "seed", "best", "burnin", "iter", "thin", "n.cores")
library.shared <- "topicmodels" # Same for library or functions.


ldaOut <- c()

foreach (k = 2:(30 / n.cores - 1), .export = var.shared, .packages = library.shared) %dopar% {
    ret <- LDA(dtm, k*n.cores, method="Gibbs",
               control=list(nstart=nstart, seed = seed, best=best,
                            burnin = burnin, iter = iter, thin=thin))
    assign(paste("ldaOut", k*n.cores, sep = "_"), ret)
}

The code ran without errors, but now there are 16 "R for Windows front-end" processes, 15 of which use 0% of the CPU while 1 uses 16-17%. And when the process finished I got this message:

A LDA_Gibbs topic model with 16 topics.

    Warning messages:
    1: In e$fun(obj, substitute(ex), parent.frame(), e$data) :
      already exporting variable(s): dtm, nstart, seed, best, burnin, iter, thin, n.cores
    2: closing unused connection 10 (<-MyPC:11888) 
    3: closing unused connection 9 (<-MyPC:11888) 
    4: closing unused connection 8 (<-MyPC:11888) 
    5: closing unused connection 7 (<-MyPC:11888) 
    6: closing unused connection 6 (<-MyPC:11888) 
    7: closing unused connection 5 (<-MyPC:11888) 
    8: closing unused connection 4 (<-MyPC:11888) 
    9: closing unused connection 3 (<-MyPC:11888) 

You can use the library doParallel

library(doParallel)

To get the number of cores on your computer:

n.cores <- detectCores(all.tests = T, logical = T) 

Note the distinction between logical and physical cores: the `logical` argument controls which count is returned.
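For example (the counts vary by machine; on the asker's i7-2700K this should report 4 physical and 8 logical):

```r
library(parallel)

# Physical cores only.
detectCores(logical = FALSE)
# Logical processors (with hyper-threading, usually twice the physical count).
detectCores(logical = TRUE)
```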

Now you need to create the cluster and register the parallel backend:

cl <- makePSOCKcluster(n.cores) 
doParallel::registerDoParallel(cl)
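Once registered, you can sanity-check that work is really being distributed, e.g. by asking each task for its worker's process ID (a quick standalone check, not part of the model code):

```r
library(doParallel)  # also loads foreach and parallel

cl <- makePSOCKcluster(2)   # a small cluster just for this check
registerDoParallel(cl)

# Each task reports the PID of the worker it ran on; several distinct
# PIDs means tasks are genuinely running in separate processes.
pids <- foreach(i = 1:4, .combine = c) %dopar% Sys.getpid()
print(unique(pids))

stopCluster(cl)
```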

You can create more processes than you have cores on your computer. Since R is spawning new processes, you need to declare the libraries and variables you want to share with the workers.

var.shared <- c("ldaOut", "dtm", "nstart", "seed", "best", "burnin", "iter", "thin", "n.cores")
library.shared <- c("topicmodels") # Same for libraries or functions; workers need topicmodels loaded for LDA().

Then the loop will change to:

ldaOut <- list()   # initialize the output

foreach (k = 2:(30 / n.cores - 1), .export = var.shared, .packages = library.shared) %dopar% {
    ret <- LDA(dtm, k*n.cores, method="Gibbs",
               control=list(nstart=nstart, seed = seed, best=best,
                            burnin = burnin, iter = iter, thin=thin))
    assign(paste("ldaOut", k*n.cores, sep = "_"), ret)
}

I have never used LDA before, so you might need to modify the code above a bit to make it work.
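Two details in the loop above are worth flagging: `assign()` inside `%dopar%` only creates the variable in the worker's environment, so the fitted models never reach the master session, and the range `2:(30 / n.cores - 1)` covers only a few values of k. A sketch that instead iterates over every k and collects the models from `foreach`'s return value (assuming `dtm` and the control variables are defined as in the question):

```r
library(doParallel)  # also loads foreach and parallel

cl <- makePSOCKcluster(detectCores(logical = TRUE))
registerDoParallel(cl)

# foreach returns a list; element i holds the model for k = i + 1.
# Tasks are distributed over the workers automatically.
ldaOut <- foreach(k = 2:30, .packages = "topicmodels") %dopar% {
    LDA(dtm, k, method = "Gibbs",
        control = list(nstart = nstart, seed = seed, best = best,
                       burnin = burnin, iter = iter, thin = thin))
}
names(ldaOut) <- paste("ldaOut", 2:30, sep = "_")

stopCluster(cl)
```

An explicit `.export` is unnecessary here: `foreach` automatically exports variables it finds referenced in the loop body, which is what the "already exporting variable(s)" warning in the EDIT was pointing out.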

I think LDA is hard to parallelize, since each Gibbs sweep uses the result of the previous sweep.

So to speed things up, you could:

- reduce your dtm
- use a faster library, e.g. Vowpal Wabbit
- use faster hardware, e.g. AWS

If you optimize over "hyperparameters" like alpha, eta, burnin, etc., you could run the full LDA with different hyperparameters on each core.
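That idea can be sketched with the same doParallel machinery: instead of parallelizing one Gibbs run, fit one complete model per hyperparameter setting. Here `alpha` is the illustrative hyperparameter and `k = 20` an arbitrary choice; `dtm` and the other control variables are assumed from the question:

```r
library(doParallel)  # also loads foreach and parallel

cl <- makePSOCKcluster(detectCores(logical = TRUE))
registerDoParallel(cl)

alphas <- c(0.1, 0.5, 1, 5)   # candidate Dirichlet priors to compare

# One full LDA fit per alpha, each running on its own worker.
fits <- foreach(a = alphas, .packages = "topicmodels") %dopar% {
    LDA(dtm, k = 20, method = "Gibbs",
        control = list(alpha = a, burnin = burnin,
                       iter = iter, thin = thin))
}

stopCluster(cl)
```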
