
YARN Hadoop error: Java heap space

I am using YARN on Hadoop 2.6.0. When I ran a MapReduce job, I got an error like this:

15/03/12 22:22:59 INFO mapreduce.Job: Task Id : attempt_1426132548565_0003_m_000002_1, Status : FAILED
Error: Java heap space
15/03/12 22:22:59 INFO mapreduce.Job: Task Id : attempt_1426132548565_0003_m_000000_1, Status : FAILED
Error: Java heap space
15/03/12 22:23:20 INFO mapreduce.Job: Task Id : attempt_1426132548565_0003_m_000002_2, Status : FAILED
Error: Java heap space
Container killed by the ApplicationMaster.

Did I misconfigure the java.opts property? Is the error caused by that configuration? Is there any connection between the memory settings in yarn-site.xml and mapred-site.xml?

I am very confused and would appreciate any suggestions. Thanks.

When a container exceeds its memory/CPU limits, it is killed by the ApplicationMaster. In your case, the mappers might be using more memory than their containers allow. Try adding the following configuration:

In mapred-site.xml:

 <property>
   <name>mapreduce.map.memory.mb</name>
   <value>1024</value>
   <description>The amount of memory to request from the scheduler for each map task.</description>
 </property>

The default value is 1024; try increasing it to 2048.
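Since you asked about java.opts: mapreduce.map.java.opts sets the JVM heap for each map task, and that heap must fit inside the container size set by mapreduce.map.memory.mb (a common rule of thumb is roughly 80% of the container size, leaving headroom for non-heap JVM memory). A minimal sketch for mapred-site.xml, assuming you raise the container to 2048 MB (the 2048 and -Xmx1638m values are examples, not requirements):

 <property>
   <name>mapreduce.map.memory.mb</name>
   <value>2048</value>
   <description>Container size requested from YARN for each map task (example value).</description>
 </property>
 <property>
   <name>mapreduce.map.java.opts</name>
   <value>-Xmx1638m</value>
   <description>JVM heap for each map task; roughly 80% of mapreduce.map.memory.mb (example value).</description>
 </property>

If the reducers fail the same way, the same pairing applies to mapreduce.reduce.memory.mb and mapreduce.reduce.java.opts.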

I would suggest restarting the cluster after changing the configuration.
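As for the connection between yarn-site and mapred-site: the per-task container sizes you request in mapred-site.xml must fit within the limits YARN enforces in yarn-site.xml. A map task's mapreduce.map.memory.mb request is checked against yarn.scheduler.maximum-allocation-mb, rounded up to a multiple of yarn.scheduler.minimum-allocation-mb, and all containers on a node must fit within yarn.nodemanager.resource.memory-mb. A sketch for yarn-site.xml with example values (tune these to your nodes' actual RAM):

 <property>
   <name>yarn.nodemanager.resource.memory-mb</name>
   <value>8192</value>
   <description>Total memory on each NodeManager available for containers (example value).</description>
 </property>
 <property>
   <name>yarn.scheduler.maximum-allocation-mb</name>
   <value>4096</value>
   <description>Largest container a single task may request; mapreduce.map.memory.mb must not exceed this (example value).</description>
 </property>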
