
Spring Kafka - Consumers don't receive messages sometimes

I have a multi-client Spring Boot application that sends and receives Kafka messages between its clients (which essentially means the application contains both a consumer and a producer). The configuration is as simple as it can be:

Inside the @SpringBootApplication class (it could live in a @Configuration class as well, but I didn't feel the need to create a new class just for this one bean):

@Bean
public NewTopic generalTopic() {
    // Note: a replication factor of 10 requires at least 10 brokers;
    // on a smaller cluster, topic creation fails with an
    // InvalidReplicationFactorException.
    return TopicBuilder.name("topic")
            .partitions(10)
            .replicas(10)
            .build();
}

Kafka producer and consumer configuration classes? We don't do that here; instead, a KafkaTemplate is injected into the class that sends the message:

@Autowired
private KafkaTemplate<String, String> kafkaTemplate;

To produce a message, just invoke KafkaTemplate's send(K, V) method:

kafkaTemplate.send("topic", "Hello World!");

To consume messages, a @KafkaListener is used:

@KafkaListener(topics="topic", groupId="topic")
public void consumer(String message) {
    System.out.println(message);
}

The properties are in application.properties:

spring.kafka.bootstrap-servers=194.113.64.103:9092

spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
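One setting worth checking in a setup like this (it is not shown above, so this is an assumption about which defaults are in play): without an explicit auto-offset-reset, a brand-new consumer group starts reading from the latest offset, so any messages produced before a consumer joins the group are never delivered to it. A hedged fragment for application.properties:

```properties
# Hypothetical addition: make new consumer groups start from the
# beginning of the topic instead of the default "latest".
spring.kafka.consumer.auto-offset-reset=earliest
```

This does not change which partitions a consumer is assigned; it only affects where reading starts for a group with no committed offsets.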

Every client runs all of this code. The application is sending and consuming messages, although, for some reason, sometimes the consumer receives a message and sometimes it does not (maybe the application sometimes doesn't send the messages? I doubt it, but who knows). The interval between received and unreceived messages is minimal: it varies between 1 and 10 seconds. So let's say I send one message per second (the messages being "1" through "10"). Sometimes I receive "1", "2", "6", "8"; sometimes "4", "7", "8", "9". It seems to be completely random. Note that my server runs on another continent (the US; all clients are located in South America).

Any thoughts?

PS: I know exposing my server IP is a big security hole, but this is a temporary test server and nothing other than the Kafka broker runs on it, so it is not a problem. I decided to keep it in this post so everyone can reproduce the described behavior.

In response to a comment, here is the output of:

./kafka-consumer-groups.sh --describe 'topic' --bootstrap-server 194.113.64.103:9092 --all-groups

    GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                           HOST             CLIENT-ID
topic           topic           1          0               0               0               consumer-topic-1-5b2ea195-f747-4d53-a17a-53c20a768a5f /168.194.160.183 consumer-topic-1
topic           topic           0          0               0               0               consumer-topic-1-5b2ea195-f747-4d53-a17a-53c20a768a5f /168.194.160.183 consumer-topic-1
topic           topic           4          1               1               0               consumer-topic-1-5b2ea195-f747-4d53-a17a-53c20a768a5f /168.194.160.183 consumer-topic-1
topic           topic           3          2               2               0               consumer-topic-1-5b2ea195-f747-4d53-a17a-53c20a768a5f /168.194.160.183 consumer-topic-1
topic           topic           2          0               0               0               consumer-topic-1-5b2ea195-f747-4d53-a17a-53c20a768a5f /168.194.160.183 consumer-topic-1
topic           topic           7          1               1               0               consumer-topic-1-c724077c-e911-4d6c-bb1d-1cba17c26a02 /168.194.160.183 consumer-topic-1
topic           topic           6          0               0               0               consumer-topic-1-c724077c-e911-4d6c-bb1d-1cba17c26a02 /168.194.160.183 consumer-topic-1
topic           topic           5          0               0               0               consumer-topic-1-c724077c-e911-4d6c-bb1d-1cba17c26a02 /168.194.160.183 consumer-topic-1
topic           topic           9          1               1               0               consumer-topic-1-c724077c-e911-4d6c-bb1d-1cba17c26a02 /168.194.160.183 consumer-topic-1
topic           topic           8          1               1               0               consumer-topic-1-c724077c-e911-4d6c-bb1d-1cba17c26a02 /168.194.160.183 consumer-topic-1

You can test my application by placing this in your main class:

@Bean
CommandLineRunner commandLineRunner(KafkaTemplate<String, String> kafkaTemplate) {
    return args -> {
        for (int i = 0; i < 10; i++)
            kafkaTemplate.send("topic", "Hello! " + i);
    };
}

From what you've shown in your code, everything is working as expected.

Your producer doesn't include a key, so events are round-robined across your 10 topic partitions. (Side note: you don't need such a high replication factor.) You are calling send(String topic, V value), not send(String topic, K key, V value).
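The effect of keyless sends can be sketched in plain Java. This is a simplified round-robin model, not Kafka's actual partitioner (recent producers use a "sticky" partitioner that fills a batch per partition before moving on), but the net effect is the same: keyless records get spread over all partitions.

```java
import java.util.HashSet;
import java.util.Set;

public class KeylessPartitioning {
    // Simplified model: the i-th keyless record goes to partition i % numPartitions.
    static int partitionFor(int recordIndex, int numPartitions) {
        return recordIndex % numPartitions;
    }

    public static void main(String[] args) {
        Set<Integer> used = new HashSet<>();
        for (int i = 0; i < 10; i++) {
            used.add(partitionFor(i, 10));
        }
        // 10 keyless records over 10 partitions land on 10 different partitions,
        // so a consumer that owns only some partitions sees only some messages.
        System.out.println(used.size()); // 10
    }
}
```

With the keyed overload send(topic, key, value), records sharing a key would instead always hash to the same partition.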

Your @KafkaListener has a hard-coded groupId. Since that is not a dynamic property, and Kafka consumers in the same group cannot read the same partitions at the same time, running multiple consumers (as shown by the two consumer-ids in your consumer group description) means each one reads its own distinct subset of the partitions. And, as mentioned above, your producer distributes records across all of those partitions.
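The split you see in the consumer-group output can be sketched with a simplified range-style assignment (an approximation of Kafka's default assignor for a single topic; names like "client-A" are illustrative only):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RangeAssignmentSketch {
    // Split the partition list into contiguous ranges, one per consumer,
    // giving the first consumers one extra partition when it doesn't divide evenly.
    static Map<String, List<Integer>> assign(int numPartitions, List<String> consumers) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        int perConsumer = numPartitions / consumers.size();
        int extra = numPartitions % consumers.size();
        int next = 0;
        for (int c = 0; c < consumers.size(); c++) {
            int count = perConsumer + (c < extra ? 1 : 0);
            List<Integer> parts = new ArrayList<>();
            for (int i = 0; i < count; i++) parts.add(next++);
            assignment.put(consumers.get(c), parts);
        }
        return assignment;
    }

    public static void main(String[] args) {
        Map<String, List<Integer>> a = assign(10, List.of("client-A", "client-B"));
        // Each client exclusively owns 5 of the 10 partitions, so each client
        // only ever sees messages routed to its own 5 partitions.
        System.out.println(a.get("client-A")); // [0, 1, 2, 3, 4]
        System.out.println(a.get("client-B")); // [5, 6, 7, 8, 9]
    }
}
```

That matches the describe output above: one consumer-id owns partitions 0-4, the other owns 5-9.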

You can see in the consumers' log output which partitions each one is assigned, but you can also register a rebalance listener in Spring Kafka for the partition-assignment event to check this yourself.

"maybe the application sometimes does not send the messages"

That is correct. By default, the Kafka producer sends batches of data, not one record at a time. You need to flush the producer (KafkaTemplate has a flush() method, and send(...) returns a future you can wait on) to guarantee all events are actually sent before the process exits.
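The failure mode can be illustrated with a toy buffered sender. This is not Kafka's real internals (the actual producer has a background sender thread driven by linger.ms and batch.size), but the hazard is the same: if the process exits before the buffer is drained, buffered records are lost.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchingSketch {
    // Toy model of a batching producer: send() only buffers;
    // nothing leaves the process until flush() is called.
    private final List<String> buffer = new ArrayList<>();
    private final List<String> delivered = new ArrayList<>();

    void send(String record) {
        buffer.add(record); // asynchronous: record is only buffered
    }

    void flush() {
        delivered.addAll(buffer); // simulate handing the batch to the broker
        buffer.clear();
    }

    int deliveredCount() {
        return delivered.size();
    }

    public static void main(String[] args) {
        BatchingSketch producer = new BatchingSketch();
        for (int i = 0; i < 10; i++) producer.send("msg-" + i);
        // Exit here (e.g. a CommandLineRunner finishing) and all 10 are lost:
        System.out.println(producer.deliveredCount()); // 0
        producer.flush();
        System.out.println(producer.deliveredCount()); // 10
    }
}
```

In the CommandLineRunner above, calling kafkaTemplate.flush() after the loop (or joining the futures returned by send) would force delivery before the runner returns.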

Notice that the LOG-END-OFFSET column doesn't add up to 10; it sums to 6, so only 6 of the 10 events actually reached the broker.
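That arithmetic, reading the LOG-END-OFFSET column from the describe output above (reordered here by partition number 0 through 9):

```java
public class LagArithmetic {
    // Sum the per-partition log-end offsets to get the total records produced.
    static int totalProduced(int[] endOffsets) {
        int total = 0;
        for (int o : endOffsets) total += o;
        return total;
    }

    public static void main(String[] args) {
        // LOG-END-OFFSET for partitions 0..9, from the consumer-group output above.
        int[] endOffsets = {0, 0, 0, 2, 1, 0, 0, 1, 1, 1};
        // Only 6 of the 10 sent records were ever written to the topic.
        System.out.println(totalProduced(endOffsets)); // 6
    }
}
```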
