
Spring Cloud Kafka Stream: Publishing to DLQ is failing with Avro

I'm unable to publish to the DLQ topic when using ErrorHandlingDeserializer together with Avro to handle deserialization errors. Below is the error raised while publishing.

Topic TOPIC_DLT not present in metadata after 60000 ms.
ERROR KafkaConsumerDestination{consumerDestinationName='TOPIC', partitions=6, dlqName='TOPIC_DLT'}.container-0-C-1 o.s.i.h.LoggingHandler:250 - org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1@49abe531]; nested exception is java.lang.RuntimeException: failed, failedMessage=GenericMessage

And here is the application.yml:

spring:
  cloud:
    stream:
      bindings:
        process-in-0:
          destination: TOPIC
          group: groupID
      kafka:
        binder:
          brokers:
            - xxx:9092
          configuration:
            security.protocol: SASL_SSL
            sasl.mechanism: PLAIN
          jaas:
            loginModule: org.apache.kafka.common.security.plain.PlainLoginModule
            options:
              username: username
              password: pwd
          consumer-properties:
            key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
            value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            spring.deserializer.value.delegate.class: io.confluent.kafka.serializers.KafkaAvroDeserializer
          producer-properties:
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
            value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer            
        bindings:
          process-in-0:
            consumer:
              configuration:
                basic.auth.credentials.source: USER_INFO
                schema.registry.url: registryUrl
                schema.registry.basic.auth.user.info: user:pwd
                security.protocol: SASL_SSL
                sasl.mechanism: PLAIN
              max-attempts: 1
              dlqProducerProperties:
                configuration:
                  basic.auth.credentials.source: USER_INFO
                  schema.registry.url: registryUrl
                  schema.registry.basic.auth.user.info: user:pwd
                key.serializer: org.apache.kafka.common.serialization.StringSerializer
                value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
              deserializationExceptionHandler: sendToDlq
              ackEachRecord: true
              enableDlq: true
              dlqName: TOPIC_DLT
              autoCommitOnError: true
              autoCommitOffset: true
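
The first line of the error ("Topic TOPIC_DLT not present in metadata after 60000 ms") suggests the DLT topic may simply not exist on the broker and auto-creation is disabled. One way to rule that out is to create it explicitly, with the same partition count as the source topic. This is only a sketch: the SASL client properties file and the replication factor are assumptions, and the broker address and partition count are taken from the config and error message above.

```shell
# Hypothetical: create the DLT topic with the same partition count (6)
# as the source topic, so the binder can mirror partitions 1:1.
# client-sasl.properties is an assumed file holding the SASL_SSL/PLAIN
# settings shown in application.yml; replication factor is a guess.
kafka-topics.sh --create \
  --bootstrap-server xxx:9092 \
  --command-config client-sasl.properties \
  --topic TOPIC_DLT \
  --partitions 6 \
  --replication-factor 3
```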

I'm using the following dependencies:

spring-cloud-dependencies - 2021.0.1
spring-boot-starter-parent - 2.6.3
spring-cloud-stream-binder-kafka
kafka-schema-registry-client - 5.3.0
kafka-avro-serializer - 5.3.0

I'm not sure what exactly I'm missing.

After going through a lot of documentation, I found that for Spring to publish to the DLQ, the original topic and the DLT topic must have the same number of partitions. If that can't be arranged, you need to either set dlqPartitions to 1 or provide a DlqPartitionFunction bean manually. With dlqPartitions: 1, all messages go to partition 0.
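
As an alternative to dlqPartitions: 1, the DlqPartitionFunction bean mentioned above can be provided explicitly. A minimal sketch, assuming spring-cloud-stream-binder-kafka is on the classpath (as in the dependency list above); the class name DlqConfig and the constant partition 0 are illustrative choices:

```java
import org.springframework.cloud.stream.binder.kafka.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DlqConfig {

    // Route every failed record to partition 0 of the DLT, regardless of
    // the source partition -- the same behaviour dlqPartitions: 1 gives.
    // The lambda receives the consumer group, the failed ConsumerRecord,
    // and the exception, and returns the target DLT partition.
    @Bean
    public DlqPartitionFunction dlqPartitionFunction() {
        return (group, record, throwable) -> 0;
    }
}
```

Since this is framework wiring rather than standalone code, it only takes effect when picked up by Spring's component scanning alongside the binder configuration shown earlier.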
