
Strange Java JVM OutOfMemoryError

I have an application running under JBoss 5. It has a user-facing login side as well as background Quartz jobs. One background job makes a SOAP call and pulls down a large object to parse. It uses a good amount of memory.

I see a pattern in when I get an OOM exception like this:

2013-06-04 21:44:36,855 ERROR [STDERR] (QuartzScheduler_Scheduler-NON_CLUSTERED_MisfireHandler) java.lang.OutOfMemoryError: Java heap space
2013-06-04 21:44:36,855 ERROR [STDERR] (http-0.0.0.0-80-9) Exception in thread "http-0.0.0.0-80-9" 
2013-06-04 21:44:36,855 ERROR [STDERR] (http-0.0.0.0-80-9) java.lang.OutOfMemoryError: Java heap space
2013-06-04 21:44:36,855 ERROR [STDERR] (Session Monitor) Exception in thread "Session Monitor" 
2013-06-04 21:44:36,855 ERROR [STDERR] (Monitor Runner) java.lang.OutOfMemoryError: Java heap space
2013-06-04 21:44:36,855 ERROR [STDERR] (Monitor Runner)     at java.util.Arrays.copyOf(Arrays.java:2219)
2013-06-04 21:44:36,855 ERROR [STDERR] (Monitor Runner)     at java.util.ArrayList.toArray(ArrayList.java:329)
2013-06-04 21:44:36,855 ERROR [STDERR] (Monitor Runner)     at java.util.ArrayList.<init>(ArrayList.java:151)
2013-06-04 21:44:36,855 ERROR [STDERR] (Monitor Runner)     at com.icesoft.util.MonitorRunner$1.run(MonitorRunner.java:54)

This job runs nightly. When nobody uses the UI for days, I'll get the OOM after a few days. But when people use the UI application daily, it can run for a month or more and never hit the OOM problem.

It seems like something good happens when the application is being used, but I have no idea what. Does anyone have an idea of where to start looking and what to try?

We're using JDK 1.7.0_11, and our JAVA_OPTS are:

set "JAVA_OPTS=-Xrs -Xms256M -Xmx4096M -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC "

Thank you,

Jim

Use the -XX:+HeapDumpOnOutOfMemoryError JVM argument, then open the resulting heap dump in MAT (Eclipse Memory Analyzer). MAT will show you the likely OOM suspects, and you can also walk through the heap yourself and identify which objects occupy most of it. This way you can easily find the culprit.
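For example, extending the JAVA_OPTS line from the question might look like this (the dump path is an assumption; point it at a disk with enough free space to hold a dump up to the -Xmx size, 4 GB here):

```shell
rem Sketch: same options as before, plus an automatic heap dump on OOM.
rem -XX:HeapDumpPath controls where the .hprof file is written; C:\dumps
rem is illustrative, not from the original question.
set "JAVA_OPTS=-Xrs -Xms256M -Xmx4096M -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\dumps"
```

The dump is written only when an OutOfMemoryError is actually thrown, so the flag costs nothing until the failure you are trying to diagnose occurs.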

Also, it would help to enable GC logging so you can see whether there are any patterns that lead up to the OOM.
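On JDK 7 that could be done with flags like the following (the log file location is an assumption):

```shell
rem Sketch: append JDK 7 GC logging flags to the existing options.
rem -Xloggc writes one line per collection; PrintGCDetails adds per-generation
rem sizes and PrintGCDateStamps adds wall-clock timestamps, so the log can be
rem correlated with the nightly job's run time.
set "JAVA_OPTS=%JAVA_OPTS% -verbose:gc -Xloggc:C:\logs\gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
```

If old-generation usage climbs steadily between the nightly runs and full GCs stop reclaiming space, that points at a leak; if usage is flat until one run spikes it, the job's large SOAP payload is the more likely cause.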
