Multiple Kafka Listeners With Same GroupId All Receiving Message
I have a Kafka listener configured in our Spring Boot application as follows:
@KafkaListener(topicPartitions = @TopicPartition(topic = "data.all", partitions = { "0", "1", "2" }), groupId = "kms")
public void listen(ObjectNode message) throws JsonProcessingException {
    // Code to convert to JSON string and write to Elasticsearch
}
This application gets deployed to and run on 3 servers and, despite all having the group id kms, they all get a copy of the message, which means I get 3 identical records in Elastic. When I'm also running an instance locally, 4 copies get written.

I've confirmed that the producer is only writing 1 message to the topic by checking the count of all messages on the topic before and after the write occurs; it only increases by 1. How can I prevent this?
When you manually assign partitions like that, you are responsible for distributing the partitions across the instances; the group is ignored. You must either use group management and let Kafka do the partition assignment for you, or manually assign a disjoint set of partitions to each instance.

Instead of topicPartitions, use topics = "data.all".
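The suggested fix amounts to switching the listener from manual assignment to subscription. A minimal sketch of the corrected listener, keeping the method signature from the question (the enclosing class name and the surrounding Spring Boot wiring are assumed):

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.node.ObjectNode;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class DataAllListener {

    // Subscribing by topic (rather than topicPartitions) enables group
    // management: Kafka spreads the partitions of data.all across all
    // instances sharing groupId "kms", so each record is processed once.
    @KafkaListener(topics = "data.all", groupId = "kms")
    public void listen(ObjectNode message) throws JsonProcessingException {
        // Code to convert to JSON string and write to Elasticsearch
    }
}
```

With three instances and a three-partition topic, each instance typically ends up owning one partition; starting a fourth (local) instance simply leaves one consumer idle instead of producing a duplicate record.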
What happens when you don't assign partitions manually

When a consumer A joins a consumer group (let's say group consumer), a partition reassignment happens, and since A is the only member of group consumer, it gets all the partitions. When a consumer B then tries to join the same consumer group consumer, partition reassignment happens again, and both A and B get partitions to listen to.

What is happening in your case is that more than one consumer is listening to the same partitions, so every consumer in the group that is listening to a given partition receives the messages from it. The mutual exclusivity between consumers in a consumer group is lost because more than one consumer is listening to the same partitions.
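The rebalance behaviour described above can be illustrated without Kafka at all. The sketch below is a simplified, hypothetical model of range-style partition assignment, not Kafka's actual coordinator code: each time a member joins, the partitions are redistributed so that no partition is owned by more than one consumer in the group.

```java
import java.util.*;

public class RangeAssignmentDemo {

    // Simplified range assignment: split the partition list into contiguous
    // chunks, one chunk per group member (members sorted for determinism).
    static Map<String, List<Integer>> assign(List<String> members, int partitionCount) {
        List<String> sorted = new ArrayList<>(members);
        Collections.sort(sorted);
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        int n = sorted.size();
        int perMember = partitionCount / n;
        int extra = partitionCount % n;
        int next = 0;
        for (int i = 0; i < n; i++) {
            int take = perMember + (i < extra ? 1 : 0);
            List<Integer> chunk = new ArrayList<>();
            for (int j = 0; j < take; j++) chunk.add(next++);
            assignment.put(sorted.get(i), chunk);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // A joins alone: it owns every partition of the 3-partition topic.
        System.out.println(assign(List.of("A"), 3));        // {A=[0, 1, 2]}
        // B joins: a "rebalance" splits the partitions between A and B.
        System.out.println(assign(List.of("A", "B"), 3));   // {A=[0, 1], B=[2]}
        // Three members: one partition each; no partition has two owners.
        System.out.println(assign(List.of("A", "B", "C"), 3));
    }
}
```

In this model, every partition has exactly one owner after each rebalance. The questioner's manual topicPartitions setup bypasses exactly that invariant: every instance attaches itself to all three partitions, so each one receives every record.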