
How to deal with JVM OutOfMemoryError on Linux?

Red Hat Enterprise Linux 5.4 32-bit + Sun HotSpot JVM 6u5 32-bit + JVM settings -Xms1536m -Xmx2048m -XX:PermSize=128m -XX:MaxPermSize=512m.
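For reference, these settings correspond to a launch command along the following lines (the main class name is only a placeholder):

$ java -Xms1536m -Xmx2048m -XX:PermSize=128m -XX:MaxPermSize=512m com.example.Main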

The JVM crashes with the following errors:

java.lang.OutOfMemoryError: requested 828752 bytes for Chunk::new. Out of swap space? Internal Error (allocation.cpp:218), pid=21557, tid=329534352 Error: Chunk::new

java.lang.OutOfMemoryError: requested 383504 bytes for GrET in /BUILD_AREA/jdk6_05/hotspot/src/share/vm/utilities/growableArray.cpp. Out of swap space? Internal Error (allocation.inline.hpp:42), pid=16927, tid=334281616 Error: GrET in /BUILD_AREA/jdk6_05/hotspot/src/share/vm/utilities/growableArray.cpp

java.lang.OutOfMemoryError: requested 256000 bytes for GrET in /BUILD_AREA/jdk6_05/hotspot/src/share/vm/utilities/growableArray.cpp. Out of swap space? Internal Error (allocation.inline.hpp:42), pid=16863, tid=334216080 Error: GrET in /BUILD_AREA/jdk6_05/hotspot/src/share/vm/utilities/growableArray.cpp ..........

It may be a memory leak in the JVM's own C/C++ (native) code, native memory usage reaching a critical limit for the JVM, or insufficient swap space on the platform.

How do I deal with a memory leak in the JVM's own C/C++ code?
Valgrind v3.7 does not work with the HotSpot JVM 6u5.
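Since Valgrind cannot attach, one coarse alternative for spotting a native (C/C++) leak is to sample the process's memory map with pmap at intervals and watch whether the total keeps growing (the pid 21557 is taken from the first crash log; the figures are only illustrative):

$ pmap -x 21557 | tail -1
total kB         3056244  2988100  2973212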

The JVM crash here is just misleading. The problem is that the process runs out of address space. Your "-Xmx2048m" is simply too big for the currently available virtual memory and/or for a 32-bit O/S in general.

Under 32-bit Windows a process can effectively address only about 1.6 GB of RAM; other operating systems differ. 32-bit Linux should be able to use about 3 GB at most.

On top of the object heap size (-Xmx), the JVM needs further RAM for thread stacks, object management, GC structures, and so on. In practice this limits the maximum heap size on 32-bit Windows systems to around 1100 MB.
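A rough address-space budget, assuming a ~3 GB user address space for a 32-bit Linux process and illustrative sizes for the non-heap parts, shows why -Xmx2048m leaves almost no headroom for native allocations such as Chunk::new:

Java heap (-Xmx)                          2048 MB
Permanent generation (-XX:MaxPermSize)     512 MB
Thread stacks, JIT code, GC structures    ~200-400 MB (varies)
--------------------------------------------------
Total                                     ~2760-2960 MB of a ~3072 MB address space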

For more details about the process memory size limit see, for example, this blog post: https://sinewalker.wordpress.com/2007/03/04/32-bit-windows-and-jvm-virtual-memory-limit

The error clearly says that your JVM could not allocate further memory. The available (physical) memory is 0, so your system is using swap. At some point the swap fills up with swapped-out memory pages, and further memory allocation requests will therefore fail.
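To confirm whether physical memory really is exhausted, check it with free (the figures below are only illustrative):

$ free -m
             total       used       free     shared    buffers     cached
Mem:          3951       3890         61          0         12        104
-/+ buffers/cache:       3773        177
Swap:         8190       8102         88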

Test the usage of your swap with

$ swapon -s 
Filename                Type        Size    Used    Priority
/dev/xvda2              partition   8386556 99312   -1

You can increase the swap size at any time (see this link).
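A minimal sketch for adding a swap file, assuming /swapfile and a 2 GB size are acceptable on your system (run as root or via sudo):

$ sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
$ swapon -s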
