
MassTransit - UsePartitioner - Multiple Consumers

We recently ran into an issue where users submit the same request multiple times, and because we're using the unit of work pattern inside our consumers, those repeated requests create duplicate records in our database.

We saw that there is a UsePartitioner method we can add so that messages are partitioned on an ID that we set, which makes the consumer wait until a message with that partition ID is finished before starting the next one. This seems to work just fine locally with my Docker setup, where I'm only running one container per service. However, I noticed that when we deploy this to our other environments, we're still seeing duplicate records being generated. I can't think of what else it could be, unless the partitioning only happens within a single consumer and isn't shared, since our other environments have multiple containers/consumers running. Or is there an additional setting that we're missing?

I should also add that we are using Kubernetes. In our dev environment we have 4 pods running, so all 4 pods have an instance of this consumer.


    public class TestConsumerDefinition : ConsumerDefinition<TestConsumer>
    {
        public TestConsumerDefinition()
        {
            ConcurrentMessageLimit = 20;
        }

        protected override void ConfigureConsumer(IReceiveEndpointConfigurator endpointConfigurator,
            IConsumerConfigurator<TestConsumer> consumerConfigurator)
        {
            var partitioner = consumerConfigurator.CreatePartitioner(ConcurrentMessageLimit.Value);

            consumerConfigurator.Message<TestMessage>(
                x =>
                    x.UsePartitioner(partitioner, m =>
                        $"{m.Message.DrugId}-{m.Message.PatientId}"));
        }
    }

Thank you.

First, there isn't a facility to partition across load-balanced consumers on separate instances. You could build your own distributed lock, but...

The best approach would be to ensure your consumer logic is idempotent, either using an upsert or checking whether the data already exists before adding it. Or, for extra credit, add the appropriate database-level constraint to prevent duplicates (either a unique constraint or a unique index).
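For example, a consumer along these lines would tolerate duplicate deliveries. This is only a minimal sketch, not a MassTransit deduplication feature: the `PrescriptionDbContext`, `Prescriptions` set, and `Prescription` entity are hypothetical names introduced for illustration (the entity and the unique index that backs the catch block are sketched after the next paragraph).

    using System.Threading.Tasks;
    using MassTransit;
    using Microsoft.EntityFrameworkCore;

    public class TestConsumer : IConsumer<TestMessage>
    {
        private readonly PrescriptionDbContext _db; // hypothetical EF Core DbContext

        public TestConsumer(PrescriptionDbContext db)
        {
            _db = db;
        }

        public async Task Consume(ConsumeContext<TestMessage> context)
        {
            var drugId = context.Message.DrugId;
            var patientId = context.Message.PatientId;

            // Idempotency check: skip the insert if the record already exists
            var exists = await _db.Prescriptions
                .AnyAsync(p => p.DrugId == drugId && p.PatientId == patientId);

            if (exists)
                return; // duplicate request: nothing to do

            _db.Prescriptions.Add(new Prescription { DrugId = drugId, PatientId = patientId });

            try
            {
                await _db.SaveChangesAsync();
            }
            catch (DbUpdateException)
            {
                // A consumer on another pod inserted the same row first; the unique
                // index turns the race into a no-op instead of a duplicate record.
            }
        }
    }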

The reason being, even a partitioner isn't going to prevent two requests, a second apart, from having the same data. So idempotent operations are important when dealing with distributed systems.
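For the database-level constraint mentioned above, a composite unique index on the natural key is usually enough. Again, only a sketch of the hypothetical entity and EF Core model configuration; the property types are assumptions:

    using Microsoft.EntityFrameworkCore;

    public class Prescription
    {
        public int Id { get; set; }
        public int DrugId { get; set; }    // assumed type; could be a Guid or string
        public int PatientId { get; set; }
    }

    public class PrescriptionDbContext : DbContext
    {
        public DbSet<Prescription> Prescriptions => Set<Prescription>();

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // One row per (DrugId, PatientId): a duplicate insert fails with a
            // constraint violation instead of creating a second record.
            modelBuilder.Entity<Prescription>()
                .HasIndex(p => new { p.DrugId, p.PatientId })
                .IsUnique();
        }
    }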
