
RabbitMQ queue consumption behaviour

We have a Spring Boot (2.1) application using Apache Camel (2.24) to consume from a RabbitMQ server (3.7.15).

The application appears to be consuming correctly (message-by-message, as they are placed on the queue), but in the RabbitMQ monitor it appears as though the messages are consumed 'in bulk' (see the sharp drop then flatline, even though we see in the logs that messages are being processed by the app).

We haven't set any sort of 'prefetch' behaviour that I can see. Can someone explain what's happening? Why isn't the queue count decreasing smoothly?

(screenshot: RabbitMQ management console)

Well, it simply looks like the default prefetch value is unlimited. If you want to limit it, you have to explicitly configure it.

I didn't find an official source confirming this impression, but there is at least an article that does: https://www.cloudamqp.com/blog/2017-12-29-part1-rabbitmq-best-practice.html#prefetch

The RabbitMQ default prefetch setting gives clients an unlimited buffer, meaning that RabbitMQ by default sends as many messages as it can to any consumer that looks ready to accept them.
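For reference, this is how a prefetch limit is set with the plain RabbitMQ Java client (outside of Camel); the queue name "tenant" and the limit of 100 are just placeholders:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class PrefetchExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // basic.qos: at most 100 unacknowledged messages are pushed to this consumer.
        // Without this call, the broker applies its default: an unlimited prefetch buffer.
        channel.basicQos(100);

        DeliverCallback callback = (consumerTag, delivery) -> {
            // ... process the message ...
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };

        // autoAck=false, so the unacknowledged-message window actually applies
        channel.basicConsume("tenant", false, callback, consumerTag -> { });
    }
}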

The Camel component has an option prefetchEnabled that is false by default. However, when I look at the RabbitConsumer class of the Camel component, in the method openChannel, this just means that the consumer does not explicitly set prefetch values.

A consumer without prefetch settings is not necessarily a consumer with prefetch disabled; it is a consumer that does not care about prefetch (and therefore gets a default that is defined somewhere else), as the sketch below illustrates.
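In other words, the behaviour described above boils down to something like this (a simplified sketch of the described logic, not the actual Camel source; the endpoint getters and method name are illustrative):

// Simplified sketch, NOT the real org.apache.camel.component.rabbitmq.RabbitConsumer code.
private Channel openChannel(Connection connection) throws IOException {
    Channel channel = connection.createChannel();
    if (endpoint.isPrefetchEnabled()) {
        // Only when prefetchEnabled=true does the consumer issue basic.qos,
        // limiting the number of unacknowledged messages on the channel.
        channel.basicQos(endpoint.getPrefetchSize(),
                         endpoint.getPrefetchCount(),
                         endpoint.isPrefetchGlobal());
    }
    // If prefetchEnabled=false (the default), basic.qos is never called,
    // so the broker's default applies: an unlimited prefetch buffer.
    return channel;
}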

If I have not overlooked something, the Camel option prefetchEnabled is not well named. It should be called limitPrefetch. That would also match the RabbitMQ docs:

... specifies the basic.qos method to make it possible to limit the number of unacknowledged messages on a channel (or connection)

Conclusion: I suspect that if you want a prefetch limit with the Camel component, you have to set prefetchEnabled as well as the other prefetch options. Otherwise there is no limit (which basically makes sense, since this gives you maximum throughput). For example:

from("rabbitmq://localhost:5672/delete.Tenant?queue=tenant&declare=false&autoAck=false&threadPoolSize=20&concurrentConsumers=20&prefetchEnabled=true&prefetchCount=100")
