
Why do I get a YARN Java heap space memory error?

I want to experiment with memory settings in YARN, so I tried configuring some parameters in yarn-site.xml and mapred-site.xml. By the way, I use Hadoop 2.6.0. But I get an error when I run a MapReduce job. It says:

15/03/12 10:57:23 INFO mapreduce.Job: Task Id :
attempt_1426132548565_0001_m_000002_0, Status : FAILED
Error: Java heap space
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

I think that I have configured it right: I gave map.java.opts and reduce.java.opts a small size of 64 MB. I have since tried changing some parameters, such as map.java.opts and reduce.java.opts in mapred-site.xml, and I still get this error. I think I do not really understand how YARN memory works. By the way, I am trying this on a single-node machine.

YARN handles resource management and serves both batch workloads (such as MapReduce) and real-time workloads.

There are memory settings at the YARN container level and also at the mapper and reducer level. Memory is requested in increments of the minimum container size (yarn.scheduler.minimum-allocation-mb), and mapper and reducer tasks run inside containers.
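As an illustrative sketch only (the values are placeholders for a small single-node setup, not recommendations), the container-level limits live in yarn-site.xml:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
  <description>Total memory the NodeManager may hand out to containers on this node</description>
</property>

<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
  <description>Smallest container the ResourceManager will allocate; requests are rounded up to this granularity</description>
</property>

<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
  <description>Largest container the ResourceManager will allocate</description>
</property>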

mapreduce.map.memory.mb and mapreduce.reduce.memory.mb

The above parameters describe the upper memory limit for map and reduce tasks; if the memory used by a task exceeds this limit, the corresponding container will be killed.

These parameters determine the maximum amount of memory that can be assigned to mapper and reducer tasks respectively. Let us look at an example: a mapper is bound by the upper memory limit defined in the configuration parameter mapreduce.map.memory.mb.

However, if the value of yarn.scheduler.minimum-allocation-mb is greater than the value of mapreduce.map.memory.mb, then yarn.scheduler.minimum-allocation-mb is respected and containers of that size are handed out.

These parameters need to be set carefully; if not, they can lead to poor performance or OutOfMemory errors.
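A hedged example of how these limits might be set in mapred-site.xml (the values are placeholders, not recommendations; note that with the default yarn.scheduler.minimum-allocation-mb of 1024, asking for something smaller, say 512, would still produce 1024 MB containers):

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
  <description>Upper memory limit for each map task's container</description>
</property>

<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
  <description>Upper memory limit for each reduce task's container</description>
</property>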

mapreduce.reduce.java.opts and mapreduce.map.java.opts

These properties set the JVM heap size (-Xmx) for map and reduce tasks. The value needs to be less than the upper bound defined in mapreduce.map.memory.mb / mapreduce.reduce.memory.mb, since the heap must fit within the memory allocated to the map/reduce task's container.
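A minimal sketch, assuming the 1024 MB / 2048 MB container limits from the example above; a common rule of thumb is to give the JVM heap roughly 80% of the container so there is headroom for non-heap memory:

<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx819m</value>
  <description>JVM heap for map tasks, kept below mapreduce.map.memory.mb (1024 MB)</description>
</property>

<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1638m</value>
  <description>JVM heap for reduce tasks, kept below mapreduce.reduce.memory.mb (2048 MB)</description>
</property>

In the question above, giving the tasks only 64 MB of heap via the *.java.opts settings is the most likely reason the map tasks fail with "Error: Java heap space": the heap is simply too small for the job, even if the containers themselves are large enough.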

What @Gaurav said is correct. I had a similar issue and tried something like the below. Include the properties below in yarn-site.xml and restart the VM.

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
  <description>Whether virtual memory limits will be enforced for containers</description>
</property>

<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
  <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>
