
Save simulated datasets individually (speed + memory limit)

I am working with a very large number (say two million) of simulated datasets that are created in a for-loop and listed in groups of 2000. I would like to save all 1000 lists of 2000 datasets somewhere, such that I can perform any analysis without having to generate the data again. Saving all two million datasets in a nested list exceeds memory, so that is not an option. Therefore, I tried to save them per sublist in a workspace:

# Generate data
data_list <- vector("list", 2000)

for (i in 1:1000) {
  for (j in 1:2000) {
    dataA <- cbind(rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j))
    dataB <- cbind(rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j))
    data_list[[j]] <- dataA - dataB
  }

  # Write to workspace
  assign(paste("Data", i, sep = ""), data_list)

  # Add to existing workspace and remove object (to save memory).
  if (file.exists("Workspaces.RData")) {
    old.objects <- load("Workspaces.RData")
    save(list = c(old.objects, paste("Data", i, sep = "")), file = "Workspaces.RData")
    rm(list = c(old.objects, paste("Data", i, sep = "")))
  } else {
    # Or create a new workspace if it does not exist
    save(list = paste("Data", i, sep = ""), file = "Workspaces.RData")
    rm(list = paste("Data", i, sep = ""))
  }
}

This is a very slow solution for the number and sizes of datasets I am working with, so I was wondering whether anyone has a better solution to store and load generated datasets.

Thanks in advance!

As mentioned by F.Privé, if you need to save those files, it is better to use saveRDS. That way you avoid the redundant saving and loading: save/load have to rewrite and re-read the whole growing workspace file on every iteration, while saveRDS writes each sublist to its own file once.

jj <- 1:2000
data_list <- vector("list", length(jj))  # preallocate the sublist container

for (i in 1:10) {
  for (j in jj) {
    dataA <- cbind(rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j))
    dataB <- cbind(rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j), rnorm(j))
    data_list[[j]] <- dataA - dataB
  }
  # One self-contained file per sublist; nothing else needs to be reloaded
  saveRDS(data_list, paste0("Data", i, ".rds"))
}
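With the sublists stored as individual .rds files, a later analysis can stream over them one file at a time, so only one sublist is ever held in memory. A minimal sketch, assuming files "Data1.rds" through "Data10.rds" were written as above; the `mean` summary is just a placeholder for whatever analysis you run:

```r
# Process the saved files one by one, keeping only small summaries in memory.
results <- vector("list", 10)
for (i in 1:10) {
  data_list <- readRDS(paste0("Data", i, ".rds"))  # load one sublist of 2000 matrices
  results[[i]] <- sapply(data_list, mean)          # placeholder per-dataset analysis
  rm(data_list)                                    # drop it before loading the next file
}
```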

For this particular data simulation, I would try to avoid loops: generate all the data at once (or at least in large chunks) and store it in a data.frame with an index column. Something like:

dataA <- replicate(8, rnorm(sum(jj)))  # one sum(jj) x 8 matrix instead of 2000 small ones
dataB <- replicate(8, rnorm(sum(jj)))
data_list <- dataA - dataB
data <- as.data.frame(data_list)
data[, "ind"] <- rep(jj, times = jj)   # dataset j occupies the j rows with ind == j
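With the index column in place, any single simulated dataset can be recovered by subsetting, or the whole list-of-datasets form can be rebuilt with split. A quick sketch, using the `data` and `ind` names from the snippet above (column positions 1:8 are the eight value columns produced by as.data.frame):

```r
# Rows belonging to the j-th dataset (j rows, 8 value columns), e.g. j = 5:
data5 <- data[data$ind == 5, 1:8]

# Or rebuild the full list of per-dataset data.frames in one call:
data_split <- split(data[, 1:8], data$ind)
```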

But as I assume this is not your real data simulation, it is crucial to understand why you are simulating 1000 lists of 2000 datasets. Do they all need to be in separate lists? Are they all simulated the same way? And so on...
