full_join() two data frames in segments/batches in r
I have two data frames that I am trying to merge. df1 has 20015 rows and 7 variables. df2 has 8534664 rows and 29 variables. When I do full_join(df1, df2, by = "KEY") I get Error: cannot allocate vector of size 891.2 Mb, so I set memory.limit(1000000) and I still get the same error. I ran the full_join() whilst watching the CPU usage graph in the Windows Task Manager, and it increases exponentially. I have also used gc() throughout my code.
My question is: is there a function that can join the first 1,000,000 rows, take a break, then join the next 1,000,000 rows, and so on until all rows have been joined? In other words, is there a function to run full_join() in batches?
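There is no built-in batched variant of full_join(), but one can be sketched by partitioning the KEY values into chunks, joining each chunk separately, and binding the pieces. Note that the batches must partition the key values, not arbitrary row ranges: splitting df2 into row blocks would produce spurious unmatched rows in a full join. A minimal sketch (the helper name batched_full_join and the chunk count are my own choices, not part of dplyr):

```r
library(dplyr)

# Hypothetical helper: full-join df1 and df2 in chunks of the key space,
# so only the rows for one slice of keys are materialised at a time.
batched_full_join <- function(df1, df2, by = "KEY", n_chunks = 10) {
  keys <- union(df1[[by]], df2[[by]])
  # Assign each distinct key to exactly one chunk.
  chunks <- split(keys, cut(seq_along(keys), n_chunks, labels = FALSE))
  pieces <- lapply(chunks, function(k) {
    full_join(df1[df1[[by]] %in% k, , drop = FALSE],
              df2[df2[[by]] %in% k, , drop = FALSE],
              by = by)
  })
  bind_rows(pieces)
}

# Toy example: keys 1:5 on one side, 3:8 on the other.
a <- data.frame(KEY = 1:5, x = letters[1:5])
b <- data.frame(KEY = 3:8, y = LETTERS[3:8])
res <- batched_full_join(a, b, n_chunks = 2)
```

Because every distinct key lands in exactly one chunk, binding the chunk-wise full joins gives the same rows as a single full_join, only the row order may differ. This trades peak memory for repeated subsetting, so it only helps if the result is written out (or reduced) chunk by chunk rather than kept whole.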
This is just to report the time it takes to run the join with full_join and with merge from data.table on a 64-bit Windows system (Intel ~3.5 GHz, 120 GB RAM). Hope it helps, at least as a reference for your case.
library(data.table)
df1 <- data.table(KEY = sample(1:800, 20015, replace = TRUE),
                  matrix(rnorm(20015 * 7), 20015, 7))        # ~1.1 MB
df2 <- data.table(KEY = sample(1:800, 8534664, replace = TRUE),
                  matrix(rnorm(8534664 * 29), 8534664, 29))  # ~1.9 GB

library(dplyr)
tick <- Sys.time()
df_join <- full_join(df1, df2, by = "KEY")  # ~58.1 GB in memory
tock <- Sys.time() - tick                   # ~1.85 min

# With data.table merge.
tick <- Sys.time()
df_join <- merge(df1, df2, by = "KEY", allow.cartesian = TRUE)  # ~58.1 GB in memory
tock <- Sys.time() - tick                                       # ~5.75 min
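If you go the data.table route, setting the key on both tables before merging lets the join use the sorted index. A small sketch on toy-sized tables (the sizes here are deliberately tiny so it runs quickly; the timings above were measured on the full-sized data):

```r
library(data.table)

# Toy tables with overlapping, duplicated keys, mirroring the shape above.
dt1 <- data.table(KEY = sample(1:800, 2000, replace = TRUE), V1 = rnorm(2000))
dt2 <- data.table(KEY = sample(1:800, 5000, replace = TRUE), V2 = rnorm(5000))

# Pre-sort both tables on KEY so merge() can join on the key directly.
setkey(dt1, KEY)
setkey(dt2, KEY)

# all = TRUE gives a full outer join; allow.cartesian = TRUE is needed
# because duplicated keys on both sides multiply the matched rows.
dt_join <- merge(dt1, dt2, all = TRUE, allow.cartesian = TRUE)
```

With duplicated keys on both sides this is a many-to-many join, so the result can be much larger than either input, which is exactly why the benchmark above blows up to tens of gigabytes.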