

HornetQ clustered queue and failing node: are messages lost?

I'm facing a design issue in which I would like to have only one JMS producer sending messages to two consumers. There are only two servers, and the producer will start generating messages that will be load balanced (with round robin) to both consumers.

In the hypothetical case of one server failing, I do have a mechanism so a new producer will be activated in the remaining server. But what will happen to the messages that were being processed in the server that went down?

Will they be reassigned to the remaining server, and thus processed by the remaining consumer? Or will they be lost?

If the latter is true, there will be another problem. The producer creates messages based on files on a NAS, so when a server goes down, the newly activated producer will start creating messages based on the contents of the NAS, which may duplicate messages (but that case is handled). The real problem is this: if the server that goes down is not the server with the active producer, then when that server comes back up it will have no messages to consume, and no new messages will replace the ones lost.

How can I achieve a design so that no messages are lost?

Note: When one server goes down, the journal and bindings are lost.

Once a message is transferred to a particular node, it belongs to that node.

If a node goes down, you would have to activate that node with its journal, and the message state would be recovered from disk. You could eventually have messages redistributed if that node no longer has consumers (that will depend on redistribution configuration, of course).
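For reference, redistribution is controlled per address in `hornetq-configuration.xml`. A minimal sketch, assuming the standard `jms.queue.` prefix for JMS queue addresses (the match pattern is an assumption, not taken from the question):

```xml
<address-settings>
   <address-setting match="jms.queue.#">
      <!-- 0 = redistribute immediately when this node has no consumers
           for the queue; -1 (the default) disables redistribution -->
      <redistribution-delay>0</redistribution-delay>
   </address-setting>
</address-settings>
```

Note that redistribution only helps while the node holding the messages is still alive; it does not recover messages from a node whose journal is gone.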

Or, the best approach would be to have a backup node for each node.

We have been advising the use of collocated topologies, where each VM has an active instance plus a backup instance for the other server. That way each live server would also carry a backup configuration. That's being improved in 2.4.0 as we speak, since at the moment it requires a lot of manual configuration.
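A shared-store live/backup pair can be sketched as below. This is only an illustration, assuming HornetQ 2.x shared-store HA; the directory paths are hypothetical placeholders for storage reachable by both instances:

```xml
<!-- live server: hornetq-configuration.xml -->
<configuration>
   <shared-store>true</shared-store>
   <!-- journal and bindings must sit on storage the backup can also reach -->
   <bindings-directory>/mnt/shared/bindings</bindings-directory>
   <journal-directory>/mnt/shared/journal</journal-directory>
</configuration>

<!-- backup server, collocated in the other VM -->
<configuration>
   <backup>true</backup>
   <shared-store>true</shared-store>
   <bindings-directory>/mnt/shared/bindings</bindings-directory>
   <journal-directory>/mnt/shared/journal</journal-directory>
</configuration>
```

With this layout, if a live server dies, its collocated backup takes over using the same journal, so persistent messages survive the failure.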

So, in summary, either:

  • Restart the node, or
  • Configure backup nodes

