
Sun JVM Committed Virtual Memory High Consumption

We have a production Tomcat (6.0.18) server which runs with the following settings:

-server -Xms7000M -Xmx7000M -Xss128k -XX:+UseFastAccessorMethods 
-XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote.port=7009 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false -verbose:gc -XX:+PrintGCDetails 
-XX:+PrintGCTimeStamps 
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.util.logging.config.file=/opt/apache-tomcat-6.0.18/conf/logging.properties 
-agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=n 
-Djava.endorsed.dirs=/opt/apache-tomcat-6.0.18/endorsed 
-classpath :/opt/apache-tomcat-6.0.18/bin/bootstrap.jar

java version "1.6.0_12"
Java(TM) SE Runtime Environment (build 1.6.0_12-b04)
Java HotSpot(TM) 64-Bit Server VM (build 11.2-b01, mixed mode)

After running for a while, we see (via JConsole) the following memory consumption:

Current heap size: 3 034 233 kbytes
Maximum heap size: 6 504 832 kbytes
Committed memory:  6 504 832 kbytes
Pending finalization: 0 objects
Garbage collector: Name = 'PS MarkSweep', Collections = 128, Total time spent = 16 minutes
Garbage collector: Name = 'PS Scavenge', Collections = 1 791, Total time spent = 17 minutes

Operating System: Linux 2.6.26-2-amd64
Architecture: amd64
Number of processors: 2

Committed virtual memory: 9 148 856 kbytes
Total physical memory:  8 199 684 kbytes
Free physical memory:     48 060 kbytes
Total swap space: 19 800 072 kbytes
Free swap space: 15 910 212 kbytes

The question is: why do we have so much committed virtual memory? Note that the max heap size is ~7 GB (as expected, since Xmx=7G).

top shows the following:

31413 root  18  -2 8970m 7.1g  39m S   90 90.3 351:17.87 java

Why does the JVM need an additional 2 GB of virtual memory? Can I get a non-heap memory distribution just like in JRockit ( http://blogs.oracle.com/jrockit/2009/02/why_is_my_jvm_process_larger_t.html )?

Edit 1: Perm is 36M.

-Xms7000M -Xmx7000M

To me that says to the JVM: "allocate 7 GB as the initial heap size, with a maximum of 7 GB".

So the process will always look like 7 GB to the OS, because that is what the JVM asked for via the Xms flag. What it is actually using inside the JVM is what gets reported as the current heap size (about 3 GB here). Normally you set a high Xms to prevent slowdowns due to excessive garbage collection. When the JVM hits a (JVM-defined) percentage of memory in use, it does a quick garbage collection. If that fails to free up memory, it tries a full collection. Finally, if that also fails and the maximum defined by Xmx has not been reached, it asks the OS for more memory. All of this takes time and can be really noticeable on a production server, so asking for the memory up front avoids it.
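
A minimal sketch of what that looks like from inside the JVM (the class name is just for illustration): with -Xms equal to -Xmx, Runtime.totalMemory() (the committed heap) matches Runtime.maxMemory() from startup, while the used heap is much smaller, which corresponds to the JConsole numbers above.

public class HeapFigures {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024L * 1024L;
        // With -Xms7000M -Xmx7000M the committed (total) heap equals the max heap
        // from startup; only the "used" figure reflects live objects.
        System.out.println("max heap:       " + rt.maxMemory() / mb + " MB");
        System.out.println("committed heap: " + rt.totalMemory() / mb + " MB");
        System.out.println("used heap:      " + (rt.totalMemory() - rt.freeMemory()) / mb + " MB");
    }
}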

I'm not familiar with JConsole, but are you sure the JVM is using the extra 2 GB? It looks to me like it's the OS or other processes that bring the total up to 9 GB.

Also, a common explanation for a JVM using significantly more virtual memory than the -Xmx parameter allows is that you have memory-mapped files (MappedByteBuffer), or use a library that uses MappedByteBuffer.
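
If you want to see the effect for yourself, here is a minimal, self-contained sketch (the file path and mapping size are made up): the mapped region shows up in the process's virtual size in top, but not in the heap figures JConsole reports.

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedDemo {
    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("/tmp/mapped.dat", "rw");
        FileChannel ch = raf.getChannel();
        long size = 1024L * 1024L * 1024L;                 // map a 1 GB region
        MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
        buf.put(0, (byte) 1);                              // touch a page so it becomes resident
        System.out.println("Mapped " + size + " bytes outside the Java heap");
        Thread.sleep(60 * 1000);                           // keep the process alive; inspect it with top or pmap
        ch.close();
        raf.close();
    }
}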

You may want to try connecting JConsole to the JVM and looking at the memory allocation... Maybe your Perm space is taking up that extra 2 GB... The heap is only one part of what your VM needs to keep alive...
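
If JConsole is awkward to reach, the same per-pool numbers (Perm Gen, Code Cache, Eden, Survivor, Old Gen) are available programmatically; a minimal sketch you could run inside the target JVM (for example from a test servlet; the class name is made up):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PoolDump {
    public static void main(String[] args) {
        // Lists every memory pool the JVM knows about, heap and non-heap, with its current usage.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getName() + " (" + pool.getType() + "): " + pool.getUsage());
        }
    }
}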

It seems that this problem was caused by the very high number of page faults the JVM was experiencing. Most likely, when Sun's JVM experiences a lot of page faults it starts to allocate additional virtual memory (I still don't know why), which may in turn increase I/O pressure even more, and so on. As a result we got very high virtual memory consumption and periodic hangs (up to 30 minutes) on full GC.

Three things helped us to get stable work in production:

  1. Decreasing the Linux kernel's tendency to swap (for a description, see "What Is the Linux Kernel Parameter vm.swappiness?") helped a lot. We set vm.swappiness=20 on all Linux servers that run heavy background JVM tasks.

  2. Decreasing the maximum heap size (-Xmx) to avoid putting excessive pressure on the OS itself. We now use a 9 GB heap on 12 GB machines.

  3. And last but very important: profiling the code and optimizing memory-allocation bottlenecks to eliminate allocation bursts as much as possible (see the sketch after this list).
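
Purely as an illustration of point 3 (this is not our actual production code; the class and buffer size are hypothetical): the kind of change that eliminates allocation bursts is reusing large buffers instead of allocating them per request.

import java.io.IOException;
import java.io.InputStream;

public class BufferReuse {
    // One scratch buffer per worker thread, allocated once and reused,
    // so request spikes no longer translate into allocation bursts and extra GCs.
    private static final ThreadLocal<byte[]> BUFFER = new ThreadLocal<byte[]>() {
        @Override
        protected byte[] initialValue() {
            return new byte[64 * 1024];   // 64 KB scratch buffer
        }
    };

    static int drain(InputStream in) throws IOException {
        byte[] buf = BUFFER.get();        // reused across requests handled by this thread
        int total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;                   // stream through the fixed buffer, no per-call allocation
        }
        return total;
    }
}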

That's all. Now servers work very well.
