
I cannot execute a map-reduce job on Hadoop configured in standalone mode

I am trying to test a very simple Hadoop map-reduce job on my computer (macOS 10.7) against the local filesystem (standalone mode). The job takes a .csv file (data-01) and counts the occurrences of some fields.
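The core per-record logic of such a job can be sketched without the Hadoop API at all (a hypothetical illustration; the real job would wrap this in a `Mapper`/`Reducer` pair, and the column index is an assumed example):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the counting logic only -- not the actual job.
// Counts how often each value appears in one column of CSV records.
public class FieldCount {
    static Map<String, Integer> countField(String[] csvLines, int column) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : csvLines) {
            String[] fields = line.split(",");
            if (column < fields.length) {
                // merge() increments the existing count or starts at 1
                counts.merge(fields[column], 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] lines = {
            "1,apple,red",
            "2,banana,yellow",
            "3,apple,green"
        };
        Map<String, Integer> counts = countField(lines, 1);
        System.out.println(counts.get("apple"));  // prints 2
        System.out.println(counts.get("banana")); // prints 1
    }
}
```

In the Hadoop version, the per-line split-and-extract runs in the mapper and the summing in the reducer; the logic is the same.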

I downloaded CDH4 Hadoop and ran the job. It seemed to start normally, but after all the splits were processed I got the following error:

13/03/12 12:11:18 INFO mapred.MapTask: Processing split: file:/path/in/data-01:9999220736+33554432
13/03/12 12:11:18 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/03/12 12:11:18 INFO mapred.LocalJobRunner: Starting task: attempt_local2133287029_0001_m_000299_0
13/03/12 12:11:18 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
13/03/12 12:11:18 INFO mapred.MapTask: Processing split: file:/path/in/data-01:10032775168+33554432
13/03/12 12:11:18 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/03/12 12:11:18 INFO mapred.LocalJobRunner: Starting task: attempt_local2133287029_0001_m_000300_0
13/03/12 12:11:18 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
13/03/12 12:11:18 INFO mapred.MapTask: Processing split: file:/path/in/data-01:10066329600+33554432
13/03/12 12:11:18 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/03/12 12:11:18 INFO mapred.LocalJobRunner: Starting task: attempt_local2133287029_0001_m_000301_0
13/03/12 12:11:18 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
13/03/12 12:11:18 INFO mapred.MapTask: Processing split: file:/path/in/data-01:10099884032+33554432
13/03/12 12:11:18 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/03/12 12:11:18 INFO mapred.LocalJobRunner: Starting task: attempt_local2133287029_0001_m_000302_0
13/03/12 12:11:18 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
13/03/12 12:11:18 INFO mapred.MapTask: Processing split: file:/path/in/data-01:10133438464+32025555
13/03/12 12:11:18 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/03/12 12:11:19 INFO mapred.LocalJobRunner: Map task executor complete.
13/03/12 12:11:19 WARN mapred.LocalJobRunner: job_local2133287029_0001
java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:399)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:949)
    at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:389)
    at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:78)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:668)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:740)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:338)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:231)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
13/03/12 12:11:19 INFO mapreduce.Job: Job job_local2133287029_0001 failed with state FAILED due to: NA
13/03/12 12:11:19 INFO mapreduce.Job: Counters: 0

I get the same error no matter how small the input file is...

It turned out that the default JVM options were superseding my local configuration (I still don't understand why). Setting

export HADOOP_CLIENT_OPTS="-Xmx1024m"

before running the job solved the problem.
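This also explains why the input size doesn't matter: the stack trace fails in `MapTask$MapOutputBuffer.init`, which eagerly allocates the fixed-size map-side sort buffer (`io.sort.mb`, 100 MB by default) before reading any input, and `LocalJobRunner` runs the tasks inside the client JVM, whose heap `HADOOP_CLIENT_OPTS` controls. If raising the client heap is not an option, shrinking that buffer should also avoid the allocation failure. A sketch of the alternative, as a `mapred-site.xml` fragment (`mapreduce.task.io.sort.mb` is the Hadoop 2.x name; the older `io.sort.mb` is accepted as a deprecated alias):

```xml
<!-- mapred-site.xml: shrink the map-side sort buffer so it fits in a
     small client heap (alternative to raising -Xmx via HADOOP_CLIENT_OPTS) -->
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>50</value>
</property>
```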
