
Hadoop - Insufficient memory for the Java Runtime Environment while starting YARN service

I set up a cluster (1 master and 2 slaves: slave1 and slave2) following the tutorial http://pingax.com/install-apache-hadoop-ubuntu-cluster-setup. The first time I ran them, both the HDFS and YARN services worked fine. But after I stopped them and started them again, I got the following when starting the YARN service from the master with start-yarn.sh.

# starting yarn daemons
# starting resourcemanager, logging to /local/hadoop/logs/yarn-dev-resourcemanager-login200.out
# 
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 168 bytes for AllocateHeap
# An error report file with more information is saved as: /local/hadoop/hs_err_pid21428.log

# Compiler replay data is saved as: /local/hadoop/replay_pid21428.log
slave1: starting nodemanager, logging to /local/hadoop/logs/yarn-dev-nodemanager-login198.out
slave2: starting nodemanager, logging to /local/hadoop/logs/yarn-dev-nodemanager-login199.out
slave2: #
slave2: # There is insufficient memory for the Java Runtime Environment to continue.
slave2: # Native memory allocation (malloc) failed to allocate 168 bytes for AllocateHeap
slave2: # An error report file with more information is saved as:
slave2: # /local/hadoop/hs_err_pid27199.log
slave2: #
slave2: # Compiler replay data is saved as:
slave2: # /local/hadoop/replay_pid27199.log

Following the suggestions in "out of memory error in Hadoop" and "Java heap space out of memory error while running a mapreduce program", I changed the heap memory size limit to 256, 512, 1024, and 2048 in 3 files (~/.bashrc, hadoop-env.sh, and mapred-site.sh), but nothing worked.
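For context, these heap settings usually look something like the lines below. This is a minimal sketch with illustrative values, not the poster's exact configuration; the variable names are the standard ones in a Hadoop 2.x install:

# In hadoop-env.sh: heap size, in MB, for each HDFS daemon's JVM
export HADOOP_HEAPSIZE=512

# In yarn-env.sh: heap size, in MB, for the ResourceManager/NodeManager JVMs
export YARN_HEAPSIZE=512

Note that on a small box, lowering these values is more likely to help than raising them: every daemon (NameNode, DataNode, ResourceManager, NodeManager) runs in its own JVM with its own heap, and they all compete for the same physical RAM.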

Note: I am not an expert on Linux or the JVM.

Log file contents from one of the nodes:

# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 32784 bytes for Chunk::new
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (allocation.cpp:390), pid=16375, tid=0x00007f39a352c700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_102-b14) (build 1.8.0_102-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.102-b14 mixed mode linux-amd64 compressed oops)
# Core dump written. Default location: /local/hadoop/core or core.16375 (max size 1 kB). To ensure a full core dump, try "ulimit -c unlimited" before starting Java again

CPU:total 1 (1 cores per cpu, 1 threads per core) family 6 model 45 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, avx, aes, clmul, tsc, tscinvbit, tscinv

Memory: 4k page, physical 2051532k(254660k free), swap 1051644k(1051324k free)
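The crash log's own suggestions ("Increase physical memory or swap space", "Check if swap backing store is full") can be acted on directly. Below is a minimal sketch of checking memory and adding a swap file; the path and 2GB size are illustrative, and root access is assumed:

# Check how much RAM and swap are actually free
free -m

# Create and enable a 2GB swap file (path and size are illustrative)
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Optional: keep the swap file across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab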

It's not clear from your post how much memory the VM itself has, but from the log it looks like the VM has only 2GB of physical RAM and 1GB of swap. If that's the case, you really need to increase the VM's memory: absolutely no less than 4GB of physical RAM, or you'll be lucky to get the Hadoop stack running while keeping the operating system happy at the same time. Ideally, give each VM around 8GB of RAM so you have a couple of GB to spare for MapReduce jobs.
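If the VM cannot be upgraded right away, YARN can at least be told how little memory it has to hand out, so that containers do not oversubscribe the machine. A sketch of the relevant yarn-site.xml properties follows; the property names are standard YARN settings, but the 1024 MB values are assumptions scaled to a roughly 2GB node:

<!-- yarn-site.xml: total MB the NodeManager offers to containers -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1024</value>
</property>

<!-- largest single container the scheduler will grant -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>1024</value>
</property>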



 