
Hadoop - Insufficient memory for the Java Runtime Environment while starting YARN service

I have set up a cluster (1 master & 2 slaves: slave1, slave2) based on the tutorial http://pingax.com/install-apache-hadoop-ubuntu-cluster-setup . The first time I ran them, both the HDFS and YARN services started without any problem. But when I stopped and ran them again, I got the following while running the YARN service ( start-yarn.sh ) from the master:

# starting yarn daemons
# starting resourcemanager, logging to /local/hadoop/logs/yarn-dev-resourcemanager-login200.out
# 
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 168 bytes for AllocateHeap
# An error report file with more information is saved as: /local/hadoop/hs_err_pid21428.log

# Compiler replay data is saved as: /local/hadoop/replay_pid21428.log
slave1: starting nodemanager, logging to /local/hadoop/logs/yarn-dev-nodemanager-login198.out
slave2: starting nodemanager, logging to /local/hadoop/logs/yarn-dev-nodemanager-login199.out
slave2: #
slave2: # There is insufficient memory for the Java Runtime Environment to continue.
slave2: # Native memory allocation (malloc) failed to allocate 168 bytes for AllocateHeap
slave2: # An error report file with more information is saved as:
slave2: # /local/hadoop/hs_err_pid27199.log
slave2: #
slave2: # Compiler replay data is saved as:
slave2: # /local/hadoop/replay_pid27199.log

Based on the suggestions from out of Memory Error in Hadoop and "Java Heap space Out Of Memory Error" while running a mapreduce program , I varied the heap memory size limit to 256, 512, 1024 and 2048 (MB) in all 3 files ~/.bashrc , hadoop-env.sh and mapred-site.xml , but nothing worked.
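For reference, a minimal sketch of the kind of change attempted, assuming the Hadoop 2.x variable names (HADOOP_HEAPSIZE in hadoop-env.sh; the per-daemon overrides usually live in yarn-env.sh):

# hadoop-env.sh -- cap the default daemon heap, in MB (name assumed from Hadoop 2.x defaults)
export HADOOP_HEAPSIZE=512
# yarn-env.sh -- per-daemon heap caps for the YARN services
export YARN_RESOURCEMANAGER_HEAPSIZE=512
export YARN_NODEMANAGER_HEAPSIZE=512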

Note: I'm not an expert on Linux or the JVM.

Log file contents from one of the nodes:

# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 32784 bytes for Chunk::new
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (allocation.cpp:390), pid=16375, tid=0x00007f39a352c700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_102-b14) (build 1.8.0_102-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.102-b14 mixed mode linux-amd64 compressed oops)
# Core dump written. Default location: /local/hadoop/core or core.16375 (max size 1 kB). To ensure a full core dump, try "ulimit -c unlimited" before starting Java again

CPU:total 1 (1 cores per cpu, 1 threads per core) family 6 model 45 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, avx, aes, clmul, tsc, tscinvbit, tscinv

Memory: 4k page, physical 2051532k(254660k free), swap 1051644k(1051324k free)
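The figures on that Memory line can be cross-checked on the node itself; a quick sanity check (standard Linux commands, output omitted):

free -m      # physical RAM and swap in MB; should roughly match the hs_err_pid report
swapon -s    # list active swap devices/files and how much of each is in use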

It's not clear from your post how much memory the VM itself has, but it looks like the VM has only 2GB of physical memory and 1GB of swap. If that is the case, you really need to increase the memory of the VM. Give it absolutely nothing less than 4GB of physical memory, or you'll be lucky to get the Hadoop stack running while keeping the OS happy at the same time. Ideally, set each VM to about 8GB of RAM to ensure you have a few GB of RAM to throw at the MapReduce jobs.
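If the VM's RAM cannot be raised right away, adding swap is a stopgap that at least lets the daemons start; a minimal sketch, assuming a 2GB file at /swapfile (size and path are illustrative):

sudo fallocate -l 2G /swapfile   # reserve a 2GB file (or: dd if=/dev/zero of=/swapfile bs=1M count=2048 if fallocate is unsupported)
sudo chmod 600 /swapfile         # swap files must not be readable by other users
sudo mkswap /swapfile            # format the file as swap
sudo swapon /swapfile            # enable it for the current boot

Bear in mind that swap only avoids the hard malloc failure; Hadoop daemons paging to disk will crawl, so more physical RAM remains the real fix.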
