
What reserves and holds my Java memory when sending out AMQP events from my Spring Boot app?

My Spring Boot application provides some features that perform an action and then send out AMQP events. One of these seems to suffer from a memory leak. Triggering it causes the app's memory consumption to rise and not come down for several hours.

I am a bit clueless as to what reserves and holds this memory. My own threads look harmless enough. They do process data (sometimes inefficiently), but they simply pass it on from method to method, without storing it anywhere (no object or static fields).

My Dynatrace memory chart looks like this. The hierarchy shows that the leftmost side reserves and holds ~7 GB of main memory (survivor space).

[screenshot: Dynatrace memory allocation chart]

There are several entries there that I cannot fathom. I am not aware of actively using Netty, yet it shows up here, so it is probably an indirect dependency pulled in by Spring or the JMS client. At the bottommost layer I find things like ByteBufferUtils - does Netty buffer the events I am sending out? Above that I find ByteToMessageDecoder.channelRead - why "read" if I am in a sending process? Further up there is SslHandler.unwrap - SSL is fine, but why should the SSL interaction hold on to larger amounts of data?

Can somebody help shed light on this? Has anybody had a similar situation? Or do you have ideas for next steps I could take to analyze this further?

Our JMS client dependencies are:

  • javax.jms:javax.jms-api:jar:2.0.1
  • org.apache.qpid:qpid-jms-client:jar:0.61.0

We were able to solve this issue. It was not caused by Netty or JMS at all. Our basis team had replaced the G1 garbage collector in our virtual machine with the alternative SerialGC.

That garbage collector is much slower at releasing unused memory, hence the stark increase in memory usage. At the same time, a suboptimal configuration allowed heap allocations to grow to 100% of the available memory. This left the application no headroom for recurring side tasks, so these ran into out-of-memory situations.
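For reference, a minimal sketch of how the remaining headroom can be checked at runtime via the standard memory MXBean (the class name is just illustrative, not part of our application):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    // Minimal sketch: print current heap usage against the configured maximum,
    // to see how much headroom the JVM really has left.
    public class HeapHeadroomCheck {
        public static void main(String[] args) {
            MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = memoryBean.getHeapMemoryUsage();
            long usedMb = heap.getUsed() / (1024 * 1024);
            long maxMb = heap.getMax() / (1024 * 1024); // getMax() is -1 if no limit is set
            if (heap.getMax() > 0) {
                System.out.printf("Heap: %d MB used of %d MB max (%.0f%% full)%n",
                        usedMb, maxMb, 100.0 * heap.getUsed() / heap.getMax());
            } else {
                System.out.printf("Heap: %d MB used, no maximum configured%n", usedMb);
            }
        }
    }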

We now force the garbage collector algorithm explicitly by passing the Java option -XX:+UseSerialGC.
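To confirm which collector the JVM actually ended up with after such a change (the option can be passed on the java command line or via the JAVA_TOOL_OPTIONS environment variable), the garbage collector MXBeans can be queried at startup. A minimal sketch, with an illustrative class name:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Minimal sketch: log which garbage collectors are actually active.
    // The serial collector typically reports "Copy" and "MarkSweepCompact",
    // while G1 reports "G1 Young Generation" and "G1 Old Generation".
    public class GcCheck {
        public static void main(String[] args) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("GC: %s (collections=%d, time=%d ms)%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }
    }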

Netty and JMS appear so prominently in our Dynatrace chart because they happen to be the most memory-hungry components (which is okay, because they handle the main load of the app). They also hold on to their memory until the messages have been sent out, which can mean that their objects survive at least one garbage collection cycle - hence the slightly misleading classification into the "survivor" space.
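To see those spaces directly, the JVM's memory pool MXBeans can be listed; again only an illustrative sketch:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    // Minimal sketch: list the JVM memory pools. Objects that outlive a young
    // generation collection are copied into the "Survivor Space" pool before
    // being either collected or promoted to the old generation.
    public class MemoryPoolList {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getUsage() != null) {
                    System.out.printf("%-30s used=%d KB%n",
                            pool.getName(), pool.getUsage().getUsed() / 1024);
                }
            }
        }
    }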
