
java.lang.OutOfMemoryError: Java heap space error while running seq2sparse in mahout

I am trying to cluster some hand-made data using k-means in Mahout. I created 6 files, each containing barely one or two words of text, built a sequence file from them with ./mahout seqdirectory, and am now trying to convert that sequence file to vectors with ./mahout seq2sparse. The conversion fails with java.lang.OutOfMemoryError: Java heap space, even though the sequence file is only 0.215 KB.

Command : ./mahout seq2sparse -i mokha/output -o mokha/vector -ow

Error Log :

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/bitnami/mahout/mahout-distribution-0.5/mahout-examples-0.5-job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/bitnami/mahout/mahout-distribution-0.5/lib/slf4j-jcl-1.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Apr 24, 2013 2:25:11 AM org.slf4j.impl.JCLLoggerAdapter warn
WARNING: No seq2sparse.props found on classpath, will use command-line arguments only
Apr 24, 2013 2:25:12 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Maximum n-gram size is: 1
Apr 24, 2013 2:25:12 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Deleting mokha/vector
Apr 24, 2013 2:25:12 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Minimum LLR value: 1.0
Apr 24, 2013 2:25:12 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Number of reduce tasks: 1
Apr 24, 2013 2:25:12 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Initializing JVM Metrics with processName=JobTracker, sessionId=
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0001
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate
INFO:
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0001_m_000000_0 is allowed to commit now
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0001_m_000000_0' to mokha/vector/tokenized-documents
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate
INFO:
Apr 24, 2013 2:25:12 AM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0001_m_000000_0' done.
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO:  map 100% reduce 0%
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0001
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 5
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:   FileSystemCounters
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     FILE_BYTES_READ=1471400
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     FILE_BYTES_WRITTEN=1496783
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:   Map-Reduce Framework
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     Map input records=6
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     Spilled Records=0
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.Counters log
INFO:     Map output records=6
Apr 24, 2013 2:25:13 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0002
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:13 AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer <init>
INFO: io.sort.mb = 100
Apr 24, 2013 2:25:14 AM org.apache.hadoop.mapred.LocalJobRunner$Job run
WARNING: job_local_0002
java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:781)
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:524)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:613)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
Apr 24, 2013 2:25:14 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO:  map 0% reduce 0%
Apr 24, 2013 2:25:14 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0002
Apr 24, 2013 2:25:14 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 0
Apr 24, 2013 2:25:14 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0003
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer <init>
INFO: io.sort.mb = 100
Apr 24, 2013 2:25:15 AM org.apache.hadoop.mapred.LocalJobRunner$Job run
WARNING: job_local_0003
java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:781)
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:524)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:613)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO:  map 0% reduce 0%
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0003
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 0
Apr 24, 2013 2:25:16 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 0
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0004
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 0
Apr 24, 2013 2:25:16 AM org.apache.hadoop.mapred.LocalJobRunner$Job run
WARNING: job_local_0004
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
        at java.util.ArrayList.RangeCheck(ArrayList.java:547)
        at java.util.ArrayList.get(ArrayList.java:322)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:124)
Apr 24, 2013 2:25:17 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO:  map 0% reduce 0%
Apr 24, 2013 2:25:17 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0004
Apr 24, 2013 2:25:17 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 0
Apr 24, 2013 2:25:17 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Deleting mokha/vector/partial-vectors-0
Apr 24, 2013 2:25:17 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/home/bitnami/mahout/mahout-distribution-0.5/bin/mokha/vector/tf-vectors
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:224)
        at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:55)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
        at org.apache.mahout.vectorizer.tfidf.TFIDFConverter.startDFCounting(TFIDFConverter.java:350)
        at org.apache.mahout.vectorizer.tfidf.TFIDFConverter.processTfIdf(TFIDFConverter.java:151)
        at org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles.run(SparseVectorsFromSequenceFiles.java:262)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles.main(SparseVectorsFromSequenceFiles.java:52)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:187)

The bin/mahout script reads the environment variable MAHOUT_HEAPSIZE (in megabytes) and, if it is set, uses it to set the JAVA_HEAP_MAX variable. The version of Mahout that I'm using (0.8) defaults JAVA_HEAP_MAX to 3 GB. Executing

export MAHOUT_HEAPSIZE=10000m

before a canopy clustering run seems to have helped my runs stay alive longer on a single machine. However, I suspect the best solution would be to switch to running on a cluster.
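To make the mechanism concrete, here is a sketch of what the script does with that variable. This is a paraphrase of bin/mahout's logic, not a verbatim excerpt, and the exact handling may differ between releases; the values below are illustrative.

```shell
# Sketch of how bin/mahout derives the JVM heap flag from MAHOUT_HEAPSIZE.
# Paraphrased logic, not the script itself; details vary by Mahout release.
JAVA_HEAP_MAX=-Xmx3g                        # the 0.8 default mentioned above
MAHOUT_HEAPSIZE=4096                        # megabytes, as a bare number
if [ -n "$MAHOUT_HEAPSIZE" ]; then
  # the script appends the "m" suffix itself
  JAVA_HEAP_MAX="-Xmx${MAHOUT_HEAPSIZE}m"
fi
echo "$JAVA_HEAP_MAX"
```

If the script does append the suffix itself, a bare number (e.g. `export MAHOUT_HEAPSIZE=4096`) is the safer form to export.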

For reference, there's another related post: Mahout runs out of heap space

I don't know if you've tried this, but I'm posting it in case you missed it:

"Set the environment variable MAVEN_OPTS to allow for more memory via export MAVEN_OPTS=-Xmx1024m"

Refer to the Common Problems section here
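For completeness, the quoted advice amounts to running something like the following before building or running Mahout with Maven. The 1024 MB value is just the one quoted above; raise it if the build still runs out of memory.

```shell
# Give Maven's JVM a larger heap before building/running Mahout from source.
# -Xmx1024m is the value quoted above; adjust to taste.
export MAVEN_OPTS=-Xmx1024m
echo "$MAVEN_OPTS"
# mvn install    # the build would now inherit the larger heap (not run here)
```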
