
Out of memory exception in my code

I am running a piece of code for long hours as part of a stress test on an Oracle DB, using java version "1.4.2". In a nutshell, what I am doing is:

while (true)
{
    // allocate some memory to use as the blob
    byte[] data = new byte[1000];
    stmt = fConnection.prepareStatement(query); // compile an INSERT query which uses the above blob
    stmt.execute();   // insert this blob-row into the database
    stmt.close();
}

Now I want to run this test for 8-10 hrs. However, apparently after inserting about 15 million records I hit java.lang.OutOfMemoryError.

I am running this with -Xms512m -Xmx2g. I tried using higher values, but I don't seem to have that much hardware, nor do I think it is required:

    java -Xms512m -Xmx4g -jar XX.jar
    Invalid maximum heap size: -Xmx4g
    The specified size exceeds the maximum representable size.
    Could not create the Java virtual machine.
    java -Xms2g -Xmx3g -jar XX.jar
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    Could not create the Java virtual machine.

I run this as a multithreaded program, so around 10 threads are doing the inserts.

Is there any way I can get around this problem in a possibly no-hack manner? I mean, what if I decide to run this for 15-20 hrs instead of 8-10 hrs?

EDIT: added stmt.close() since I am using that in my code already; some changes made based on comments.

Thanks

PS: sorry, I can't post the actual code because of an NDA.

Basically, I think you are barking up the wrong tree:

  • The JVM / GC will manage to deallocate unreachable objects, no matter how fast you allocate them. If you are running the classic non-concurrent GC, then the JVM will simply stop doing other things until the GC has deallocated memory. If you configured your JVM to use a concurrent GC, it will try to run the GC and normal worker threads at the same time ... and revert to "stop everything and collect" behaviour if it cannot keep up. (The GC-logging flags sketched after this list will show you which of these is happening.)

  • If you are running out of memory, it is because something in your application (or the libraries / drivers it is using) is leaking memory. In other words, something is causing objects to remain reachable, even though your application doesn't need them any more.
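To see which of these behaviours you are actually getting, GC logging is cheap to turn on. A minimal sketch using standard HotSpot flags of that era (they should be present on 1.4.2):

    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xms512m -Xmx2g -jar XX.jar

If heap usage keeps climbing after every full collection, that is the signature of a leak, not of the GC failing to keep up.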

As comments have pointed out, you need to address this problem methodically using a memory profiler / heap dump. Randomly changing things or blaming it on the GC is highly unlikely to fix the problem.
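One low-effort way to get that heap dump is to have the JVM write one at the moment of failure. A sketch, assuming a HotSpot VM new enough to support the flag (it was only backported to the later 1.4.2 update releases):

    java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/stress.hprof -Xms512m -Xmx2g -jar XX.jar

The resulting .hprof file can then be opened in any of the profilers mentioned below.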

(When you say "... I did use stmt.close() all the time", I assume that this means that your code looks something like this:

    PreparedStatement stmt = ... 
    try {
        stmt.execute();
        // ...
    } finally {
        stmt.close();
    }

If you don't put the close call in a finally then it is possible that you are NOT calling close every time. In particular, if some exception gets thrown during the execute call or between it and the close call, then it is possible that close will not get called ... and that will result in a leak.)
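For completeness, here is what the whole loop might look like with the finally block in place. This is a sketch, not your actual code: it assumes the INSERT has a single bind parameter for the blob, and it also hoists the prepareStatement call out of the loop, since re-preparing the same query on every iteration creates needless churn:

    PreparedStatement stmt = fConnection.prepareStatement(query); // prepare once, reuse
    try {
        while (true) {
            byte[] data = new byte[1000];  // the blob payload
            stmt.setBytes(1, data);        // bind it to the INSERT's parameter
            stmt.execute();                // insert one blob-row
        }
    } finally {
        stmt.close();                      // runs even if execute() throws
    }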

This is caused by the OracleConnection's use of native memory: for NIO operations, the Oracle JDBC driver uses the native (off-heap) part of memory. Most probably, executing this query very frequently eventually makes your application fall over. To get rid of this, you can increase the JDBC statement cache size or restart your application at timed intervals, as sketched below.
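For the cache-size route, something along these lines is the usual approach. This is only a sketch, assuming the oracle.jdbc.OracleConnection API; check these methods against the driver version you actually ship:

    import oracle.jdbc.OracleConnection;

    OracleConnection oraConn = (OracleConnection) fConnection;
    oraConn.setImplicitCachingEnabled(true); // enable the driver's implicit statement cache
    oraConn.setStatementCacheSize(50);       // bound how many statements it keeps open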

I think you should add

stmt.close();

so that the memory allocated to the PreparedStatement will be freed.

If there is a leak, either in your code or in a library, the Eclipse Memory Analyzer (MAT) is a free Eclipse-based tool for delving into Java memory dump files. Its instructions include how to get the JVM to drop the dump file for you: http://www.eclipse.org/mat/

    java -Xms2g -Xmx3 -jar XX.jar
    Error occurred during initialization of VM
    Incompatible minimum and maximum heap sizes specified

Try

    java -Xms2g -Xmx3g -jar XX.jar

How much memory do you have on your box? Are you running a 32-bit or 64-bit JVM?
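That question matters: a 32-bit process can only address a few GB in total, and the contiguous region the JVM can reserve for the heap is usually well under that, which is consistent with both errors you pasted. To check which VM you have, run:

    java -version

On a 64-bit HotSpot VM the last line typically mentions "64-Bit Server VM"; a 32-bit VM omits it.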

Edit: seems that it may be a known Oracle driver issue: http://www.theserverside.com/discussions/thread.tss?thread_id=10218


Just a longshot: I know you are doing plain JDBC here, but if you happen to have any enhancers (AspectJ, Hibernate, JPA) there is a (slight) chance of a PermGen leak; set -XX:MaxPermSize=256m just to be on the safe side.

Also, the jvisualvm memory profiler and JProfiler (you can use the trial) will pinpoint it faster.
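If you would rather grab a dump from the command line, jmap can do it once you have the process id. Note that jmap ships with JDK 5 and later, so on a stock 1.4.2 you would have to rely on the dump-on-OOM flag sketched earlier:

    jmap -dump:format=b,file=stress-heap.hprof <pid>

The resulting file loads into MAT, jvisualvm, or JProfiler.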
