
Sun JVM Committed Virtual Memory High Consumption

We have a production Tomcat (6.0.18) server which runs with the following settings:

-server -Xms7000M -Xmx7000M -Xss128k -XX:+UseFastAccessorMethods 
-XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote.port=7009 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false -verbose:gc -XX:+PrintGCDetails 
-XX:+PrintGCTimeStamps 
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.util.logging.config.file=/opt/apache-tomcat-6.0.18/conf/logging.properties 
-agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=n 
-Djava.endorsed.dirs=/opt/apache-tomcat-6.0.18/endorsed 
-classpath :/opt/apache-tomcat-6.0.18/bin/bootstrap.jar

java version "1.6.0_12"
Java(TM) SE Runtime Environment (build 1.6.0_12-b04)
Java HotSpot(TM) 64-Bit Server VM (build 11.2-b01, mixed mode)

After some time of work we get (via JConsole) the following memory consumption:

Current heap size: 3 034 233 kbytes
Maximum heap size: 6 504 832 kbytes
Committed memory:  6 504 832 kbytes
Pending finalization: 0 objects
Garbage collector: Name = 'PS MarkSweep', Collections = 128, Total time spent = 16 minutes
Garbage collector: Name = 'PS Scavenge', Collections = 1 791, Total time spent = 17 minutes

Operating System: Linux 2.6.26-2-amd64
Architecture: amd64
Number of processors: 2

Committed virtual memory: 9 148 856 kbytes
Total physical memory:  8 199 684 kbytes
Free physical memory:     48 060 kbytes
Total swap space: 19 800 072 kbytes
Free swap space: 15 910 212 kbytes

The question is: why do we have so much committed virtual memory? Note that the max heap size is ~7Gb (as expected, since Xmx=7G).

top shows the following:

31413 root  18  -2 8970m 7.1g  39m S   90 90.3 351:17.87 java

Why does the JVM need an additional 2Gb of virtual memory? Can I get a non-heap memory distribution just like in JRockit http://blogs.oracle.com/jrockit/2009/02/why_is_my_jvm_process_larger_t.html ?

Edit 1: Perm is 36M.

-Xms7000M -Xmx7000M

To me that is saying to the JVM: "allocate 7gb as the initial heap size, with a maximum of 7gb".

So the process will always appear as 7gb to the OS, as that's what the JVM has asked for via the Xms flag. What it's actually using internally is what is being reported as the heap size of a few hundred MB. Normally you set a high Xms when you want to prevent slowdowns due to excessive garbage collection. When the JVM hits a (JVM-defined) percentage of memory in use, it will do a quick garbage collection. If this fails to free up memory, it will try a detailed collection. Finally, if this fails and the maximum memory defined by Xmx hasn't been reached, it will ask the OS for more memory. All this takes time and can be really noticeable on a production server - doing this in advance saves it from happening.
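The committed-versus-used distinction this answer describes can be observed from inside the JVM itself. A minimal sketch (the class name is my own) using the standard Runtime API:

```java
public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // totalMemory(): heap currently committed by the JVM (equal to Xms
        // at startup when Xms == Xmx); maxMemory(): the Xmx ceiling.
        long committed = rt.totalMemory();
        long max = rt.maxMemory();
        long used = committed - rt.freeMemory();
        System.out.printf("used=%dM committed=%dM max=%dM%n",
                used >> 20, committed >> 20, max >> 20);
    }
}
```

With -Xms7000M -Xmx7000M, `committed` and `max` would both report ~7GB from the start, while `used` stays far lower, matching what JConsole shows.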

I'm not familiar with jconsole, but are you sure the JVM is using the extra 2Gb? It looks to me like it's the OS or other processes that bring the total up to 9Gb.

Also, a common explanation for a JVM using significantly more virtual memory than the -Xmx parameter allows is that you are using memory-mapped files (MappedByteBuffer), or a library that uses MappedByteBuffer.
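To illustrate this effect, here is a small sketch (the file path and size are made up for illustration) that maps a 64 MB file; the mapping adds to the process's committed virtual memory but is not counted against the Java heap:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedDemo {
    public static void main(String[] args) throws Exception {
        // Map a 64 MB region: this reserves virtual address space in the
        // process but does NOT count against -Xmx.
        try (RandomAccessFile f = new RandomAccessFile("/tmp/mapped.bin", "rw")) {
            f.setLength(64 << 20);
            MappedByteBuffer buf = f.getChannel()
                    .map(FileChannel.MapMode.READ_WRITE, 0, 64 << 20);
            buf.put(0, (byte) 1); // touching a page commits it lazily
            System.out.println("mapped " + buf.capacity() + " bytes");
        }
    }
}
```

Run a few of these mappings and `top` will show the virtual size (VIRT) growing while the Java heap stays flat.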

You may want to try connecting JConsole to the JVM and looking at the memory allocation... Maybe your Perm space is taking up that extra 2GB... The heap is only one part of what your VM needs to keep alive...
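The per-pool breakdown JConsole shows is also available programmatically through the standard java.lang.management API. A minimal sketch (class name is mine) that prints each pool, including Perm Gen on older JVMs:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PoolDump {
    public static void main(String[] args) {
        // Same data JConsole shows on its Memory tab: one entry per pool
        // (Eden, Survivor, Old Gen, and Perm Gen / Metaspace for non-heap).
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            if (u == null) continue; // pool may be invalid
            System.out.printf("%-30s %-20s used=%dK committed=%dK%n",
                    pool.getName(), pool.getType(),
                    u.getUsed() >> 10, u.getCommitted() >> 10);
        }
    }
}
```

This confirms the "Perm is 36M" figure from Edit 1 without attaching a GUI tool, which is handy on a headless production box.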

It seems that this problem was caused by the very high number of page faults the JVM had. Most likely, when Sun's JVM experiences a lot of page faults it starts to allocate additional virtual memory (I still don't know why), which may in turn increase IO pressure even more, and so on. As a result we got very high virtual memory consumption and periodic hangs (up to 30 minutes) on full GC.

Three things helped us to get stable work in production:

  1. Decreasing the tendency of the Linux kernel to swap (for a description see What Is the Linux Kernel Parameter vm.swappiness?) helped a lot. We have vm.swappiness=20 on all Linux servers which run heavy background JVM tasks.

  2. Decreasing the maximum heap size value (-Xmx) to prevent excessive pressure on the OS itself. We now use a 9GB value on 12GB machines.

  3. And the last but very important one - code profiling and optimization of memory-allocation bottlenecks, to eliminate allocation bursts as much as possible.
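One common shape such an optimization takes - shown here only as an illustrative sketch, not the poster's actual fix - is reusing a per-thread buffer instead of allocating a fresh array in a hot path:

```java
public class BufferReuse {
    // Hypothetical example: one 8 KB scratch buffer per thread, allocated
    // once, instead of a new byte[] per request.
    private static final ThreadLocal<byte[]> BUF =
            ThreadLocal.withInitial(() -> new byte[8192]);

    static int fill(byte value) {
        byte[] buf = BUF.get(); // same array on every call in this thread
        java.util.Arrays.fill(buf, value);
        return buf.length;
    }

    public static void main(String[] args) {
        System.out.println(fill((byte) 1)); // prints 8192
        // Second call reuses the buffer; no new allocation occurs.
        System.out.println(fill((byte) 2) == fill((byte) 2)); // prints true
    }
}
```

Removing such per-request allocations smooths out the allocation rate the young-generation collector has to absorb, which is exactly what "eliminating allocation bursts" is after.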

That's all. Now the servers work very well.
