
How do I get the messages from the offset in Spring Cloud Stream Kafka Binder?

I am able to connect to the topic, but I am not sure how to get the messages from it. Here are the logs showing that I have 1076 records on the topic:

Logs

2021-10-05 11:10:09.053  INFO 17364 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder    : Partitions revoked: []
2021-10-05 11:10:09.054  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2, groupId=latest] (Re-)joining group
2021-10-05 11:10:10.063  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2, groupId=latest] (Re-)joining group
2021-10-05 11:10:13.189  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2, groupId=latest] Successfully joined group with generation 1
2021-10-05 11:10:13.205  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-2, groupId=latest] Setting newly assigned partitions: MY_TOPIC-0
2021-10-05 11:10:13.278  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-2, groupId=latest] Found no committed offset for partition MY_TOPIC-0
2021-10-05 11:10:13.664  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.SubscriptionState    : [Consumer clientId=consumer-2, groupId=latest] Resetting offset for partition MY_TOPIC-0 to offset 1076.
2021-10-05 11:10:13.730  INFO 17364 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder    : Partitions assigned: [MY_TOPIC-0]
2021-10-05 11:10:13.731  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.SubscriptionState    : [Consumer clientId=consumer-2, groupId=latest] Seeking to EARLIEST offset of partition MY_TOPIC-0
2021-10-05 11:10:13.787  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.SubscriptionState    : [Consumer clientId=consumer-2, groupId=latest] Resetting offset for partition MY_TOPIC-0 to offset 1076.
2021-10-05 11:46:44.345  INFO 17364 --- [thread | latest] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2, groupId=latest] Group coordinator test.kafka.com:6667 (id: 2246481044 rack: null) is unavailable or invalid, will attempt rediscovery
2021-10-05 11:46:51.625  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2, groupId=latest] Discovered group coordinator test.kafka.com:6667 (id: 2147482644 rack: null)
2021-10-05 11:46:56.514  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2, groupId=latest] Attempt to heartbeat failed for since member id consumer-2-386b3e7b-b8a1-48c5-9gd3-5e587e4237ad is not valid.
2021-10-05 11:46:56.516  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-2, groupId=latest] Revoking previously assigned partitions [MY_TOPIC-0]
2021-10-05 11:46:56.516  INFO 17364 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder    : Partitions revoked: [MY_TOPIC-0]
2021-10-05 11:46:56.516  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2, groupId=latest] (Re-)joining group
2021-10-05 11:46:56.572  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2, groupId=latest] (Re-)joining group
2021-10-05 11:46:59.687  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-2, groupId=latest] Successfully joined group with generation 3
2021-10-05 11:46:59.688  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-2, groupId=latest] Setting newly assigned partitions: MY_TOPIC-0
2021-10-05 11:46:59.749  INFO 17364 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-2, groupId=latest] Setting offset for partition MY_TOPIC-0 to the committed offset FetchPosition{offset=1076, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=test.kafka.com:6667 (id: 1003 rack: /default-rack), epoch=2}}
2021-10-05 11:46:59.812  INFO 17364 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder    : Partitions assigned: [MY_TOPIC-0]

Consumer Class

public interface EventConsumer {

    @Input("my-group-id")
    SubscribableChannel consumeMessage();

}

Listener Class

@Slf4j
@Component
@RequiredArgsConstructor
@EnableBinding(EventConsumer.class)
public class EventListener {

     @StreamListener(target = "my-group-id")
     public void processMessage(Object msg) {
         log.info("*** MESSAGE: ***", msg);
         **do something**
         **save messages**
     }
}
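
A side note on the configuration shown below: with autoCommitOffset: false the binder does not commit offsets on its own; instead the record's Acknowledgment is passed in the kafka_acknowledgment header and the listener is expected to acknowledge it. A sketch of what that could look like for the listener above (header and type names come from spring-kafka's KafkaHeaders and Acknowledgment; the processing body is only a placeholder):

     @StreamListener(target = "my-group-id")
     public void processMessage(@Payload Object msg,
             @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment ack) {
         log.info("*** MESSAGE: {} ***", msg);
         // ... process and save the message ...
         ack.acknowledge(); // commit the offset once processing has succeeded
     }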

Application.yml

kafka:
    consumer:
      properties:
        max.poll.interval.ms: 3600000
      max-poll-records: 10
  cloud:
    zookeeper:
      connect-string: test.kafka.com:2181,test.kafka.com:2181,test.kafka.com:2181
    stream:
      kafka:
        bindings:
          my-group-id:
            consumer:
              autoCommitOffset: false
        binder:
          brokers:
            - test.kafka.com:6667
            - test.kafka.com:6667
            - test.kafka.com:6667
          auto-create-topics: false
          auto-add-partitions: false
          jaas:
            controlFlag: REQUIRED
            loginModule: com.sun.security.auth.module.Krb5LoginModule
            options:
              useKeyTab: true
              storeKey: true
              serviceName: kafka
              keyTab: C:\\files\\user.keytab
              principal: user@test.com
              debug: true
          configuration:
            security:
              protocol: SASL_PLAINTEXT
      bindings:
        my-group-id:
          binder: kafka
          destination: MY_TOPIC
          group: test-kafka-service
  servlet:
    multipart:
      max-file-size: 50MB
      max-request-size: 50MB
spring.cloud.stream.kafka.bindings.my-group-id.consumer.resetOffsets: true
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms: 300000

Looking at the logs, execution never even reaches my listener class, where I placed a logger. Any ideas on this?

To get a @StreamListener to start again at the beginning of the log, configure a group on the binding (so that auto.offset.reset gets set to earliest) and set resetOffsets to true:

spring.cloud.stream.kafka.bindings.my-group-id.consumer.resetOffsets=true
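
Mapped onto the binding from the question, those two settings would look roughly like this in YAML (a sketch using the binding name, destination, and group already present in the question's configuration):

spring:
  cloud:
    stream:
      bindings:
        my-group-id:
          destination: MY_TOPIC
          group: test-kafka-service
      kafka:
        bindings:
          my-group-id:
            consumer:
              resetOffsets: true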

https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/3.1.4/reference/html/spring-cloud-stream-binder-kafka.html#kafka-consumer-properties

resetOffsets

Whether to reset offsets on the consumer to the value provided by startOffset. Must be false if a KafkaBindingRebalanceListener is provided; see Using a KafkaBindingRebalanceListener. See Resetting Offsets for more information about this property.

Default: false.
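
For illustration, resetOffsets is typically paired with startOffset, which controls where the reset lands (earliest is already the default once a consumer group is configured, so stating it explicitly is optional):

spring.cloud.stream.kafka.bindings.my-group-id.consumer.resetOffsets=true
spring.cloud.stream.kafka.bindings.my-group-id.consumer.startOffset=earliest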

https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/3.1.4/reference/html/spring-cloud-stream-binder-kafka.html#reset-offsets

EDIT

It works fine for me:

@SpringBootApplication
public class So69432739Application {

    public static void main(String[] args) {
        SpringApplication.run(So69432739Application.class, args);
    }

    @Bean
    public Consumer<String> input() {
        return System.out::println;
    }

    @Bean
    ApplicationRunner runner(KafkaOperations<byte[], byte[]> ops) {
        return args -> {
            ops.send("input-in-0", "one".getBytes());
            ops.send("input-in-0", "two".getBytes());
            ops.send("input-in-0", "three".getBytes());
        };
    }

}
spring.cloud.stream.bindings.input-in-0.group=grp
spring.cloud.stream.kafka.bindings.input-in-0.consumer.reset-offsets=true

The second time I ran it:

one
two
three
one
two
three
Setting offset for partition input-in-0-0 to the committed offset FetchPosition{offset=3, ...
Resetting offset for partition input-in-0-0 to position FetchPosition{offset=0, ...

Note that I am using the newer functional style (@StreamListener is deprecated); although that makes no difference to this functionality.
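
Applied to the setup from the question, the functional equivalent of the listener might look roughly like this (a sketch; the binding name processMessage-in-0 follows the binder's <beanName>-in-0 naming convention, and the destination and group values are taken from the question's YAML):

@Slf4j
@Configuration
public class EventFunctions {

    @Bean
    public Consumer<Object> processMessage() {
        return msg -> log.info("*** MESSAGE: {} ***", msg);
    }
}

spring.cloud.stream.bindings.processMessage-in-0.destination=MY_TOPIC
spring.cloud.stream.bindings.processMessage-in-0.group=test-kafka-service
spring.cloud.stream.kafka.bindings.processMessage-in-0.consumer.reset-offsets=true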

EDIT2

You can't mix properties and YAML like that; I copied your YAML (commenting out some parts that I don't need) and it works fine...

spring:
  kafka:
    consumer:
      properties:
        max.poll.interval.ms: 3600000
      max-poll-records: 10
  cloud:
    stream:
      kafka:
        bindings:
          my-group-id:
            consumer:
              autoCommitOffset: false
              reset-offsets: true
#        binder:
#          brokers:
#            - test.kafka.com:6667
#            - test.kafka.com:6667
#            - test.kafka.com:6667
#          auto-create-topics: false
#          auto-add-partitions: false
#          jaas:
#            controlFlag: REQUIRED
#            loginModule: com.sun.security.auth.module.Krb5LoginModule
#            options:
#              useKeyTab: true
#              storeKey: true
#              serviceName: kafka
#              keyTab: C:\\files\\user.keytab
#              principal: user@test.com
#              debug: true
#          configuration:
#            security:
#              protocol: SASL_PLAINTEXT
      bindings:
        input-int-0:
          binder: kafka
          destination: input-in-0
          group: test-kafka-service
#  servlet:
#    multipart:
#      max-file-size: 50MB
#      max-request-size: 50MB
Setting offset for partition input-in-0-0 to the committed offset FetchPosition{offset=6 ...
Seeking to EARLIEST offset of partition input-in-0-0
Resetting offset for partition input-in-0-0 to position FetchPosition{offset=0 ...
