
read.csv.ffdf gets stuck in R using the ff and ffbase package

I ran read.csv.ffdf from the ff or ffbase package, but it got stuck after printing the following messages:

read.table.ffdf 1..1000 (1000)  csv-read=0.15sec ffdf-write=1.24sec
read.table.ffdf 1001..63015 (62015)  csv-read=9.89sec ffdf-write=8.84sec
read.table.ffdf 63016..125030 (62015)  csv-read=8.18sec ffdf-write=6.74sec
read.table.ffdf 125031..187045 (62015)  csv-read=8.15sec ffdf-write=6.72sec
read.table.ffdf 187046..249060 (62015)  csv-read=8.19sec ffdf-write=6.72sec
read.table.ffdf 249061..311075 (62015)  csv-read=8.05sec ffdf-write=7.13sec
read.table.ffdf 311076..373090 (62015)  csv-read=8.26sec ffdf-write=6.73sec
read.table.ffdf 373091..435105 (62015)  csv-read=8.26sec ffdf-write=6.8sec

The file contains approximately 800,000 rows, so why does it stop here, and why can't I run anything from this point onwards?

Are you using the transFUN argument of read.csv.ffdf? If so, make sure the number of rows returned by transFUN is the same as the number of rows of the data.frame passed into it.

Explanation: read.csv.ffdf stops importing when a chunk returns fewer records than the next.rows argument, so if you subset inside transFUN, the import will stop early. transFUN is meant for adding derived fields or cleaning up data, not for subsetting.
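To illustrate the difference, here is a minimal sketch. The file name "big.csv" and the column name "value" are placeholders, not from the original question:

```r
library(ff)
library(ffbase)

# Correct use of transFUN: add a derived column; the row count of each
# chunk is left unchanged, so the import runs to the end of the file.
dat <- read.csv.ffdf(
  file = "big.csv",
  next.rows = 62015,
  transFUN = function(x) {
    x$logvalue <- log(x$value)  # derived field; nrow(x) is preserved
    x
  }
)

# Wrong: subsetting inside transFUN can make a chunk come back with
# fewer rows than next.rows, which read.csv.ffdf interprets as the end
# of the file, so the import stops early:
# transFUN = function(x) subset(x, value > 0)
```

If you do need to filter rows, import the full file first and subset the resulting ffdf afterwards (e.g. with subset.ffdf from ffbase).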
