
How does spring-cloud-stream-rabbit-binder work when a RabbitMQ disk or memory alarm is activated?

Versions:

spring-cloud-starter-stream-rabbit --> 2.1.0.RELEASE

RabbitMQ --> 3.7.7

Erlang --> 21.1


(1) I have created sample mq-publisher-demo and mq-subscriber-demo repositories on GitHub for reference.

When the memory alarm was activated

Publisher: was able to publish messages.

Subscriber: it seems the subscriber received messages in batches, with some delay.

When the disk alarm was activated

Publisher: was able to publish messages.

Subscriber: it seems the subscriber did not receive any messages while the disk alarm was active, but once the alarm was deactivated, all messages were delivered to the subscriber.

Are the messages getting buffered somewhere?

Is this the expected behavior? (I was expecting that RabbitMQ would stop accepting messages from the publisher, and that the subscriber would never receive any subsequent messages once either alarm was activated.)

(2) The Spring Cloud Stream documentation says the following. Does it describe the behaviour above (avoiding deadlock while the publisher keeps publishing messages)?

Starting with version 2.0, the RabbitMessageChannelBinder sets the RabbitTemplate.usePublisherConnection property to true so that the non-transactional producers avoid deadlocks on consumers, which can happen if cached connections are blocked because of a memory alarm on the broker.
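For context, here is a minimal sketch of what that property does at the Spring AMQP level (the exchange and routing key names are illustrative, not from the question): with usePublisherConnection set to true, the template sends over a connection that is separate from the consumers' connection, so a broker alarm that blocks the publishing connection cannot stall the consumers.

```java
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class PublisherConnectionSketch {

    public static void main(String[] args) {
        // Shared connection factory, as used by both producers and consumers.
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");

        RabbitTemplate template = new RabbitTemplate(connectionFactory);

        // What the binder does since 2.0: publish on a separate,
        // publisher-only connection. If the broker blocks it during a
        // memory (or disk) alarm, the consumer connection keeps working,
        // so acks still flow and consumers are not deadlocked.
        template.setUsePublisherConnection(true);

        // "demo.exchange" / "demo.key" are illustrative names.
        template.convertAndSend("demo.exchange", "demo.key", "hello");

        connectionFactory.destroy();
    }
}
```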

(3) Do we have something similar for the disk alarm as well, to avoid deadlocks?

(4) If the producer's message will not be accepted by RabbitMQ, is it possible for spring-cloud-stream to throw a specific exception to the publisher (saying that alarms are active and the message publish failed)?

I'm fairly new to these alarms in spring-cloud-stream; please help me understand them clearly. Thank you.

Are the messages getting buffered somewhere?

Yes. While a resource alarm is in effect, the broker stops reading from publishing connections, so published messages pile up in the network buffers (see the detection sketch after the list below).

  • Small messages take some time to fill the network buffers, so the publisher is blocked only after a delay.
  • A smaller network buffer fills up sooner, so the publisher is blocked sooner.
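If you want to see exactly when the broker blocks a publishing connection, the RabbitMQ Java client (which Spring AMQP uses underneath) lets you register a BlockedListener. A minimal sketch, assuming a local broker with default credentials:

```java
import com.rabbitmq.client.BlockedListener;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class AlarmBlockSketch {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker, default credentials

        Connection connection = factory.newConnection();

        // The broker sends connection.blocked / connection.unblocked
        // notifications when a memory or disk alarm engages and clears.
        connection.addBlockedListener(new BlockedListener() {
            @Override
            public void handleBlocked(String reason) {
                // reason is supplied by the broker, e.g. "low on memory"
                System.out.println("Connection blocked: " + reason);
            }

            @Override
            public void handleUnblocked() {
                System.out.println("Connection unblocked");
            }
        });
    }
}
```

In Spring AMQP itself, the same broker notifications surface as ConnectionBlockedEvent and ConnectionUnblockedEvent application events, so you can observe them with a regular ApplicationListener instead of dropping down to the raw client.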

It's better to ask questions about the behavior of RabbitMQ itself (and the Java client that Spring uses) on the rabbitmq-users Google group; that's where the RabbitMQ engineers hang out.

(2) The Spring Cloud Stream documentation says the following. Does it describe the behaviour above?

That change was made so that if producers are blocked from producing, consumers can still consume.

(4) If the producer's message will not be accepted by RabbitMQ, is it possible for spring-cloud-stream to throw a specific exception to the publisher (saying that alarms are active and the message publish failed)?

Publishing is asynchronous by default. You can enable transactions (which can slow performance down a lot), or you can enable errors on the producer: if you enable publisher confirms and returns, you get an asynchronous message on the error channel when a publish fails.
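A rough sketch of the confirms-and-returns route under the versions in the question (Boot 2.1-era property names; the binding name output and destination mq-demo are assumptions for illustration). With errorChannelEnabled set on the producer, failed sends arrive asynchronously on an error channel named after the destination:

```java
// application.properties (Boot 2.1-era property names):
//   spring.rabbitmq.publisher-confirms=true
//   spring.rabbitmq.publisher-returns=true
//   spring.cloud.stream.bindings.output.destination=mq-demo
//   spring.cloud.stream.bindings.output.producer.error-channel-enabled=true

import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.support.ErrorMessage;
import org.springframework.stereotype.Component;

@Component
public class PublishErrorHandler {

    // With errorChannelEnabled, the binder sends returned/nacked messages
    // to an error channel named "<destination>.errors"; "mq-demo" is the
    // assumed destination from the properties above.
    @ServiceActivator(inputChannel = "mq-demo.errors")
    public void handle(ErrorMessage errorMessage) {
        // The payload describes the failure; the failed outbound message
        // is available here for logging, alerting, or retry.
        System.out.println("Publish failed: " + errorMessage.getPayload());
    }
}
```

Note that this still does not make the publish call itself throw; the failure arrives asynchronously, which is why transactions are the only way to get a synchronous exception, at a significant performance cost.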
