
How to prevent multiple consumers of a topic from acting on a message at the same time

I searched for a good pattern to implement here and couldn't find anything.

First, I have multiple nodes in a cluster subscribing to a topic. Because I am interfacing with an external API, I cannot change this topic to a queue (which would solve my problem). When a message arrives on this topic, all subscribers react, but I need to ensure that only one subscriber actually does any work.

I have multiple nodes for durability and scalability. I thought about just electing a master node, but over time there will be multiple topics, and I do not want a single node to be responsible for all messages all the time. Hazelcast is not a requirement here.

import static java.lang.Math.toIntExact;

import java.util.List;
import java.util.concurrent.locks.Lock;

import javax.inject.Inject;
import javax.inject.Named;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;

@Named
public class MessageProcessorImpl
    implements MessageProcessor
{

  private static final Logger logger = LoggerFactory.getLogger(MessageProcessorImpl.class);

  private final HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();

  // Cluster-wide lock, bounded "recently processed" cache, and write position.
  private final Lock lock;

  private final List<Message> messageListCache;

  private final IAtomicLong cachePositionCounter;

  private static final long MAXIMUM_RECENTLY_PROCESSED_CACHE_SIZE = 10L;

  private final ExternalMessageService externalMessageService;

  @Inject
  public MessageProcessorImpl(final ExternalMessageService externalMessageService)
  {
    lock = hazelcastInstance.getLock("test-lock");
    messageListCache = hazelcastInstance.getList("test-list");
    cachePositionCounter = hazelcastInstance.getAtomicLong("test-atomic-long");

    this.externalMessageService = externalMessageService;
  }

  @Override
  public void processMessage(final Message message) {
    logger.trace("Acquiring lock");
    lock.lock();   // acquire before try, so finally never unlocks an unheld lock
    try {
      if (!messageListCache.contains(message)) {

        // Treat the list as a ring buffer: wrap the write position once it
        // reaches the maximum size. After wrapping, the counter must point
        // past the slot we are about to use, so set it to 1, not 0.
        long currentIndex = cachePositionCounter.getAndIncrement();
        if (currentIndex >= MAXIMUM_RECENTLY_PROCESSED_CACHE_SIZE) {
          currentIndex = 0L;
          cachePositionCounter.set(1L);
        }

        // Overwrite the oldest slot; List#add(index, element) would insert
        // and shift, letting the list grow without bound.
        final int slot = toIntExact(currentIndex);
        if (slot < messageListCache.size()) {
          messageListCache.set(slot, message);
        } else {
          messageListCache.add(message);   // cache still filling up
        }

        externalMessageService.doSomething(message);
      }
    }
    finally {
      logger.trace("Releasing lock");
      lock.unlock();
    }
  }
}

As you can see, I am using a list of recently processed messages to prevent duplicate work. The problem here is obvious: what happens if that list is overwhelmed? I could make the cache relatively large, but not infinite, so the list doesn't grow forever. Also, there is some overhead in checking whether a message is in the list.
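The "recently processed" idea above can also be expressed as a bounded LRU set, where the oldest entry is evicted automatically instead of being overwritten at a manually tracked index. This is only a local, single-JVM sketch (the class and method names are hypothetical); in the cluster you would back it with a distributed structure such as a Hazelcast map with a TTL or eviction policy, but the eviction logic is the same:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

/** Bounded "recently processed" set: once maxSize is exceeded, the
 *  least-recently-used entry is evicted automatically. */
final class RecentlyProcessed<T> {

    private final Set<T> seen;

    RecentlyProcessed(final int maxSize) {
        // Access-ordered LinkedHashMap; removeEldestEntry caps the size.
        this.seen = Collections.newSetFromMap(
            new LinkedHashMap<T, Boolean>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(final Map.Entry<T, Boolean> eldest) {
                    return size() > maxSize;
                }
            });
    }

    /** Returns true if the message was not seen before, i.e. the caller
     *  should process it; false means it is a recent duplicate. */
    synchronized boolean markIfNew(final T messageId) {
        return seen.add(messageId);
    }
}
```

Note the trade-off is unchanged: a message older than the cache capacity looks "new" again, so the bound must be chosen larger than the realistic window of redelivery.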

Is there a better solution, or a way to avoid the edge case of that list being overwhelmed and causing duplicate work? I'm not even sure whether that's a valid concern; it's difficult to reason about. Is there a different approach I should try?

This answer is very late. However, the pattern that might help in this case is leader election: one node is elected to process a message from the topic while the others wait until the message has been processed successfully. The leadership changes with each message.

Apache ZooKeeper has facilities for distributed locks and leader election, refer here
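The core of that pattern is a per-message claim: whichever node atomically registers itself first for a given message ID "wins" and does the work; everyone else skips it. In ZooKeeper this atomic create-if-absent would be the creation of an ephemeral znode named after the message ID. A minimal local sketch of the idea, with a `ConcurrentHashMap` standing in for the coordination service (class and method names are hypothetical):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Simplified per-message claim registry. A real cluster would back this
 *  with ZooKeeper (e.g. creating an ephemeral znode per message ID) or a
 *  Hazelcast map, not a local ConcurrentHashMap. */
final class MessageClaims {

    private final ConcurrentMap<String, String> claims = new ConcurrentHashMap<>();

    /** Atomically try to claim messageId for nodeId.
     *  Returns true only for the single node that won the claim. */
    boolean tryClaim(final String messageId, final String nodeId) {
        // putIfAbsent is the atomic create-if-absent primitive: only the
        // first caller for a given messageId sees null and wins.
        return claims.putIfAbsent(messageId, nodeId) == null;
    }
}
```

With an ephemeral-node backing, a winner that crashes mid-processing releases its claim automatically, so another node can retry the message.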
