
Read a large (1.5 GB) file in h2o R

I am using the h2o package for modelling in R. For this I want to read a dataset of about 1.5 GB using h2o.importFile(). I start the H2O server with the lines

library(h2oEnsemble)
h2o.init(max_mem_size = '1499m',nthreads=-1)

This produces a log

H2O is not running yet, starting it now...
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) Client VM (build 25.121-b13, mixed mode)

Starting H2O JVM and connecting: . Connection successful!

R is connected to the H2O cluster: 
H2O cluster uptime:         3 seconds 665 milliseconds 
H2O cluster version:        3.10.4.8 
H2O cluster version age:    28 days, 14 hours and 36 minutes  
H2O cluster name:           H2O_started_from_R_Lucifer_jvn970 
H2O cluster total nodes:    1 
H2O cluster total memory:   1.41 GB 
H2O cluster total cores:    4 
H2O cluster allowed cores:  4 
H2O cluster healthy:        TRUE 
H2O Connection ip:          localhost 
H2O Connection port:        54321 
H2O Connection proxy:       NA 
H2O Internal Security:      FALSE 
R Version:                  R version 3.3.2 (2016-10-31)

The following line gives me an error:

train = h2o.importFile(path = normalizePath("C:\\Users\\All data\\traindt.rds"))

DistributedException from localhost/127.0.0.1:54321, caused by java.lang.AssertionError
at water.MRTask.getResult(MRTask.java:478)
at water.MRTask.getResult(MRTask.java:486)
at water.MRTask.doAll(MRTask.java:402)
at water.parser.ParseDataset.parseAllKeys(ParseDataset.java:246)
at water.parser.ParseDataset.access$000(ParseDataset.java:27)
at water.parser.ParseDataset$ParserFJTask.compute2(ParseDataset.java:195)
at water.H2O$H2OCountedCompleter.compute(H2O.java:1315)
at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Caused by: java.lang.AssertionError
at water.parser.Categorical.addKey(Categorical.java:41)
at water.parser.FVecParseWriter.addStrCol(FVecParseWriter.java:127)
at water.parser.CsvParser.parseChunk(CsvParser.java:133)
at water.parser.Parser.readOneFile(Parser.java:187)
at water.parser.Parser.streamParseZip(Parser.java:217)
at water.parser.ParseDataset$MultiFileParseTask.streamParse(ParseDataset.java:907)
at water.parser.ParseDataset$MultiFileParseTask.map(ParseDataset.java:856)
at water.MRTask.compute2(MRTask.java:601)
at water.H2O$H2OCountedCompleter.compute1(H2O.java:1318)
at water.parser.ParseDataset$MultiFileParseTask$Icer.compute1(ParseDataset$MultiFileParseTask$Icer.java)
at water.H2O$H2OCountedCompleter.compute(H2O.java:1314)
... 5 more

Error: DistributedException from localhost/127.0.0.1:54321, caused by java.lang.AssertionError

Any help on how to fix this problem? Note: assigning more than 1499 MB of memory also gives me an error (cannot allocate memory). I am working in an environment with 16 GB of RAM.

Edit: I downloaded the 64-bit version of Java and converted my file to a CSV file. I was then able to set max_mem_size to 5 GB and the problem was solved.

For others who face this problem: 1. Download the latest 64-bit JDK. 2. Execute the following line:

h2o.init(max_mem_size = '5g',nthreads=-1)
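
After this call you can verify that the larger heap actually took effect; h2o.clusterInfo() simply prints the same cluster summary shown in the log above.

# "H2O cluster total memory" should now report close to the requested 5 GB
h2o.clusterInfo()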

You are running 32-bit Java, which is limiting the amount of memory you can start H2O with. One clue is that it won't start with a higher max_mem_size. Another clue is that it says "Client VM".

You want 64-bit Java instead. The 64-bit version will say "Server VM". You can download the Java SE 8 JDK from here:

http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Based on what you've described, I recommend setting max_mem_size = '6g' or more, which will work fine on your system once you have the right version of Java installed.
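
If an undersized 32-bit instance is still running, a minimal sketch of the switch (using only standard h2o calls and a plain java -version check) looks like this:

# 64-bit Java reports "64-Bit Server VM"; 32-bit Java reports "Client VM" as in the log above
system("java -version")

# stop the memory-limited instance and restart with a larger heap
h2o.shutdown(prompt = FALSE)
h2o.init(max_mem_size = '6g', nthreads = -1)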

train = h2o.importFile(path = normalizePath("C:\\Users\\All data\\traindt.rds"))

Are you trying to load an .rds file? That's an R binary format which is not readable by h2o.importFile(), so that won't work. You will need to store your training data in a cross-platform format (e.g. CSV, SVMLight, etc.) if you want to read it into H2O directly. If you don't have a copy in another format, just save one from R:

# read the .rds file into a data.frame; readRDS() (not load()) is the reader for .rds files
train <- readRDS("C:\\Users\\All data\\traindt.rds")

# save as CSV (row.names = FALSE avoids writing an extra row-name column)
write.csv(train, "C:\\Users\\All data\\traindt.csv", row.names = FALSE)

# import from CSV into H2O cluster directly
train = h2o.importFile(path = normalizePath("C:\\Users\\All data\\traindt.csv"))
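
For a data.frame of this size (~1.5 GB), write.csv can be slow; if the data.table package is installed, data.table::fwrite is a commonly used, much faster alternative that produces a CSV h2o.importFile() reads the same way:

# optional: faster CSV export for large data frames (assumes data.table is installed)
library(data.table)
fwrite(train, "C:\\Users\\All data\\traindt.csv")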

Another option is to load it into R from the .rds file and use the as.h2o() function:

# read the .rds file into a data.frame
train <- readRDS("C:\\Users\\All data\\traindt.rds")

# send to H2O cluster
hf <- as.h2o(train)
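
Keep in mind that as.h2o() pushes the whole data.frame from the R session into the H2O cluster, which is generally slower than h2o.importFile() for data of this size, so the CSV route above is usually preferable. Either way, a quick sanity check that the frame arrived intact:

# dim() and head() have H2OFrame methods, so this runs against the cluster
dim(hf)
head(hf)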
