
Under what circumstances does Java performance degrade with more memory?

We're load testing a Java 1.6 application in our DEV environment. The JVM heap is 2 GB (-Xms2048m -Xmx2048m). Under load, the app runs smoothly, never uses more than 1.25 GB of heap, and garbage collection is completely normal.

In our UAT environment, we run the load test with the same parameters. The only difference is the JVM heap, which is 4 GB (-Xms4096m -Xmx4096m); otherwise the hardware is exactly the same as DEV. But during load testing, performance is horrendous: the app eats up nearly the entire heap, and garbage collection runs rampant.

We've run these tests over and over again, eliminating every variable we could think of that might influence performance, but the results are the same. Under what circumstances can this happen?

There is something different about your application in the DEV and UAT environments.

Judging from the symptoms, it is (IMO) unlikely to be hardware, operating system performance tuning, or a difference in JVM versions. And it goes without saying that this is unlikely to be caused simply by the application having more memory.

(It is not inconceivable that your application does something strange, like sizing some data structures based on the maximum heap size and getting the calculation wrong. But I think you'd be aware of that possibility, so let's set it aside for now.)
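To make that concrete, here is a hypothetical sketch of what such heap-size-dependent sizing can look like; the class and its 1 KB-per-entry rule are invented purely for illustration:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical cache whose capacity scales with -Xmx: doubling the heap
    // from 2 GB to 4 GB doubles MAX_ENTRIES, so the same workload can behave
    // very differently in two otherwise identical environments.
    public class HeapScaledCache<K, V> extends LinkedHashMap<K, V> {
        // Invented sizing rule: spend about a quarter of the max heap on the
        // cache, assuming roughly 1 KB per entry.
        private static final long MAX_ENTRIES =
                Runtime.getRuntime().maxMemory() / 4 / 1024;

        // LinkedHashMap calls this after every put; returning true evicts
        // the eldest entry, making the map a bounded FIFO cache.
        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > MAX_ENTRIES;
        }
    }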

It is probably related to a difference in the OS environment; e.g. a different version of the OS or of some application, differences in networking, differences in locales, etcetera. But the bottom line is that it is 99% certain there is a memory leak in your application when it runs in UAT, and that memory leak is what is chewing up heap memory and overloading the GC.

My advice would be to treat this as a storage (memory) leak problem and use the standard tools and techniques to track down the cause. In the process, you will most likely figure out why it only occurs in your UAT environment.
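For example, on a HotSpot JVM you can trigger a heap dump from inside the process via the HotSpot diagnostic MXBean; a minimal sketch (the file name is just a placeholder):

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class HeapDumper {
        public static void main(String[] args) throws Exception {
            // HotSpot-specific diagnostic bean, not part of the standard API.
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // live=true forces a full GC first, so the dump contains only
            // reachable objects -- exactly what you want when hunting a leak.
            bean.dumpHeap("uat-heap.hprof", true);
        }
    }

The same dump can also be taken from outside the process with jmap, and then opened in a heap analyzer (e.g. Eclipse MAT or jhat) to see what is holding on to the memory.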

The culprit could be garbage collection. Normal "stop-the-world" collection caused us performance problems of exactly this kind: the server software ran very slowly, yet the load on the server was low. Eventually we found that under certain scenarios (operations producing loads of garbage), a single-threaded "stop-the-world" garbage collector was holding up the entire application.

Moving to multi-threaded (parallel) collection of the old generation alleviated the problem, via the startup parameters -XX:+UseParallelOldGC -XX:ParallelGCThreads=8. We were using "only" 2 GB heaps in tests and production, but it is also worth noting that the time a full collection takes grows with the heap size (even if your software never actually uses all of it).
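If you want to quantify how much time the collectors actually consume in each environment, the platform GC MXBeans expose cumulative counts and times; a small sketch (the class name is mine, the API is standard java.lang.management):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcStats {
        public static void main(String[] args) {
            // One bean per collector (e.g. young and old generation);
            // the figures are cumulative since JVM start.
            for (GarbageCollectorMXBean gc :
                    ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }
    }

Polling this during the load test should show whether total GC time in UAT climbs much faster than in DEV under the same load.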

You might want to read more about the different garbage collector options and tuning here: Java SE 6 HotSpot[tm] Virtual Machine Garbage Collection Tuning.

Also, the answers to this question could provide some help: Java very large heap sizes.

It will be worthwhile to analyze heap dumps from both machines and understand what is consuming the heap differently in these two environments. Class histograms will help.
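A class histogram (e.g. jmap -histo:live <pid>) shows which classes dominate each heap. As a lighter-weight first pass, you could also log per-pool usage from inside the application; a rough sketch using the standard memory MXBeans:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class PoolUsage {
        public static void main(String[] args) {
            // One pool per heap region: eden, survivor, old gen, perm gen, ...
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                MemoryUsage u = pool.getUsage();
                // Note: getMax() returns -1 if the pool has no defined maximum.
                System.out.printf("%s: used=%d MB, max=%d MB%n",
                        pool.getName(), u.getUsed() >> 20, u.getMax() >> 20);
            }
        }
    }

Seeing which pool fills up in UAT (old generation versus eden) narrows down whether objects are being promoted and retained, or simply churned.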
