Kafka consumer doesn't join custom groupId

I set up a Kafka ConsumerFactory according to the Spring Kafka documentation. However, the groupId doesn't seem to be used. Maybe I'm just getting the whole thing wrong, so I want to describe what I experienced.

This is my configuration, which doesn't seem to work:

@Bean
ConsumerFactory<String, KafkaEvent> kafkaEventConsumerFactory() {
    return new DefaultKafkaConsumerFactory<>(
            getConsumerProperties(),
            new StringDeserializer(),
            new JsonDeserializer<>(KafkaEvent.class));
}

Map<String, Object> getConsumerProperties() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // TODO
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "myGroupId");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);


    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 3);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 120000);

    props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 45000);
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 70000);

    return props;
}
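
For context, the post doesn't show the listener container factory that @KafkaListener relies on; in a typical Spring Kafka setup the consumer factory above would be wired in roughly like the following sketch (the bean below is an assumption and not part of the original configuration):

@Bean
ConcurrentKafkaListenerContainerFactory<String, KafkaEvent> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, KafkaEvent> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    // the container factory passes this consumer factory (including its group.id) to every @KafkaListener container
    factory.setConsumerFactory(kafkaEventConsumerFactory());
    return factory;
}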

And I have a KafkaEventListener configured like this, without explicitly specifying the groupId again:

@KafkaListener(topics = KafkaEventPublisher.ORDER_TOPIC)
public class KafkaEventListener {

   @Autowired
   private ConsumerFactory<String, KafkaEvent> consumerFactory;

   @KafkaHandler
   public void listenTo(@Payload KafkaEvent event) {
       LOGGER.error(LogMarker.KAFKA, consumerFactory.getConfigurationProperties().toString());
   }

}

I can also see that my groupId "myGroupId" is contained in the log output from the listener above. However, what makes me suspicious is the ConsumerCoordinator log output, which always reports joining a different groupId, and I'm not convinced this is correct.

2017-09-04 15:28:13.904 (    ) INFO consumer.internals.AbstractCoordinator             - Successfully joined group org.springframework.kafka.KafkaListenerEndpointContainer#0 with generation 40
2017-09-04 15:28:13.904 (    ) INFO consumer.internals.AbstractCoordinator             - Successfully joined group org.springframework.kafka.KafkaListenerEndpointContainer#0 with generation 40
2017-09-04 15:28:13.906 (    ) INFO consumer.internals.ConsumerCoordinator             - Setting newly assigned partitions [] for group org.springframework.kafka.KafkaListenerEndpointContainer#0
2017-09-04 15:28:13.907 (    ) INFO consumer.internals.ConsumerCoordinator             - Setting newly assigned partitions [my-topic-0] for group org.springframework.kafka.KafkaListenerEndpointContainer#0

The ConsumerConfig is also printed on Spring startup. There I can see that the groupId is wrong, while the other properties are picked up correctly.

As far as I understand, I can set the groupId globally either by setting it on the ConsumerFactory or by setting it in application.properties via spring.kafka.consumer.group-id. Neither variant works, though.
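
For illustration, the application.properties variant would presumably be the single line below; note that a hand-built ConsumerFactory bean like the one above replaces Spring Boot's auto-configured factory, so this property might not reach it (a sketch, not taken from the original post):

spring.kafka.consumer.group-id=myGroupId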

Only when I configure the groupId on the @KafkaListener annotation does the log state that the consumer joined the correct group:

2017-09-04 15:38:30.787 (    ) DEBUG consumer.internals.AbstractCoordinator             - Received successful JoinGroup response for group myGroupId: org.apache.kafka.common.requests.JoinGroupResponse@4c51c449

With this config:

@KafkaListener(topics = KafkaEventPublisher.ORDER_TOPIC, groupId = "myGroupId")

We are using Spring Boot 2.0.0.M3 (and thus Spring Kafka 2.0.0.M3).

It's a bug in M3; it is fixed on master (2.0.3.BUILD-SNAPSHOT) and in 1.3.0.M2. We expect to release the 2.0.0.RC1 release candidate later this week (waiting for the Spring Framework RC4).
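
For anyone who wants the fix before the release, the snapshot could presumably be pulled in from the Spring snapshot repository along these lines (the repository URL and version below are assumptions based on the versions quoted in the answer, not something the answer states):

<repositories>
    <repository>
        <id>spring-snapshots</id>
        <url>https://repo.spring.io/snapshot</url>
    </repository>
</repositories>

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.0.3.BUILD-SNAPSHOT</version>
</dependency>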
