
Out of memory exception in my code

I am running code for long hours as part of a stress test on an Oracle DB, using Java version 1.4.2. In a nutshell, what I am doing is:

while (true)
{
    // allocate some memory as a blob
    byte[] data = new byte[1000];
    stmt = fConnection.prepareStatement(query); // an INSERT query which uses the blob above
    stmt.execute();  // insert this blob row into the database
    stmt.close();
}

Now I want to run this test for 8-10 hrs. However, after inserting about 15 million records I hit a java.lang.OutOfMemoryError.

I am running this with -Xms512m -Xmx2g. I tried using higher values, but I don't seem to have that much hardware, nor do I think it is required:

    java -Xms512m -Xmx4g -jar XX.jar
    Invalid maximum heap size: -Xmx4g
    The specified size exceeds the maximum representable size.
    Could not create the Java virtual machine.
    java -Xms2g -Xmx3g -jar XX.jar
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    Could not create the Java virtual machine.

I run this as a multithreaded program, so around 10 threads are doing the inserts.

Is there any way I can get around this problem in a no-hack manner? I mean, what if I decide to run this for 15-20 hrs instead of 8-10 hrs?

EDIT: added stmt.close() since I am already using that in my code; some changes based on comments.

Thanks

PS: sorry, can't post the code because of an NDA.

Basically, I think you are barking up the wrong tree:

  • The JVM / GC will manage to deallocate unreachable objects, no matter how fast you allocate them. If you are running the classic non-concurrent GC, then the JVM will simply stop doing other things until the GC has deallocated memory. If you configured your JVM to use a concurrent GC, it will try to run the GC and normal worker threads at the same time ... and revert to "stop everything and collect" behaviour if it cannot keep up.

  • If you are running out of memory, it is because something in your application (or the libraries / drivers it is using) is leaking memory. In other words, something is causing objects to remain reachable, even though your application doesn't need them any more.
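For illustration (a generic, self-contained sketch; none of the names below come from the question's code): "reachable" simply means something still holds a strong reference, so the GC is not allowed to reclaim the objects, no matter how much heap you give the JVM:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakSketch {
    // Anything reachable from a static field survives every GC cycle.
    private static final List<byte[]> cache = new ArrayList<byte[]>();

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            byte[] data = new byte[1000];
            cache.add(data); // keeps every allocation reachable: this is the leak
        }
        // All 1000 arrays are still reachable, so none can be collected.
        System.out.println(cache.size());
    }
}
```

In a long-running stress test, a driver-internal cache or an unclosed statement list can play exactly the role of `cache` here.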

As comments have pointed out, you need to address this problem methodically using a memory profiler / heap dump. Randomly changing things or blaming it on the GC is highly unlikely to fix the problem.
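One cheap first step before reaching for a profiler (again a generic sketch, unrelated to the NDA'd code) is to log used heap periodically from inside the stress loop; with a genuine leak, the used size measured right after a collection creeps upward run after run instead of plateauing:

```java
public class HeapLog {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Suggest a collection first so we measure live data, not garbage.
        System.gc();
        long used = rt.totalMemory() - rt.freeMemory();
        // Call something like this every N iterations and watch the trend.
        System.out.println("used KB: " + (used / 1024));
    }
}
```

If the post-GC number stays flat for hours, the heap is healthy and the OutOfMemoryError is coming from somewhere else (e.g. native memory).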

(When you say "... I did use stmt.close() all the time", I assume that this means that your code looks something like this:

    PreparedStatement stmt = ... 
    try {
        stmt.execute();
        // ...
    } finally {
        stmt.close();
    }

If you don't put the close call in a finally block, then it is possible that you are NOT calling close every time. In particular, if some exception gets thrown during the execute call, or between it and the close call, then close will not get called ... and that will result in a leak.)
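To see the guarantee the finally block gives you, here is a self-contained sketch (FakeStatement is a made-up stand-in for a JDBC statement, used only so the example runs without a database): close() still executes even though execute() throws:

```java
public class FinallyDemo {
    // Hypothetical stand-in for a JDBC PreparedStatement.
    static class FakeStatement {
        boolean closed = false;
        void execute() { throw new RuntimeException("simulated SQL failure"); }
        void close()   { closed = true; }
    }

    public static void main(String[] args) {
        FakeStatement stmt = new FakeStatement();
        try {
            stmt.execute();
        } catch (RuntimeException e) {
            // The failure is handled here; without the finally below,
            // a close() placed after the try would never be skipped-proof.
        } finally {
            stmt.close(); // runs on both the normal and the exceptional path
        }
        System.out.println("closed = " + stmt.closed);
    }
}
```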

This is caused by OracleConnection native memory. For NIO operations, the Oracle JDBC driver uses the native (off-heap) part of memory. Most probably, executing this query too frequently makes your application dump. To get rid of this, you can increase the JDBC statement cache size or restart your application at time intervals.
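As a sketch of the cache-size suggestion: Oracle's driver can be handed an implicit statement cache size via a connection property. The property name `oracle.jdbc.implicitStatementCacheSize` (and the placeholder credentials) are assumptions to be checked against your driver version's documentation, not taken from the question:

```java
import java.util.Properties;

public class CacheProps {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("user", "scott");      // placeholder credentials
        props.setProperty("password", "tiger");
        // Assumed Oracle driver property enabling the implicit statement
        // cache; verify the exact name against your JDBC driver's docs.
        props.setProperty("oracle.jdbc.implicitStatementCacheSize", "50");
        // DriverManager.getConnection(url, props) would then let the driver
        // reuse prepared statements instead of re-allocating them each time.
        System.out.println(props.getProperty("oracle.jdbc.implicitStatementCacheSize"));
    }
}
```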

I think you should add

stmt.close();

so the memory allocated to the PreparedStatement will be freed.

If there is a leak, either in your code or in a library, the Memory Analyzer (MAT) is a free Eclipse-based app for delving into Java memory dump files. Its instructions include how to get the JVM to produce the dump file for you: http://www.eclipse.org/mat/

java -Xms2g -Xmx3 -jar XX.jar
Error occurred during initialization of VM
Incompatible minimum and maximum heap sizes specified

Try

java -Xms2g -Xmx3g -jar XX.jar

How much memory do you have on your box? Are you running a 32-bit or 64-bit JVM?

Edit: it seems that it may be a known Oracle driver issue: http://www.theserverside.com/discussions/thread.tss?thread_id=10218


Just a longshot: I know you are doing plain JDBC here, but if you happen to have any enhancers (AspectJ, Hibernate, JPA) there is a (slight) chance of a PermGen leak; set -XX:MaxPermSize=256m just to be on the safe side.

Also, the jvisualvm memory profiler and JProfiler (you can use the trial) will pinpoint it faster.
