
Spring cloud stream producing and consuming the same topic

I have a service that uses Spring Boot and Spring Cloud Stream. The service produces to a specific topic and also consumes from the same topic. When I start the service for the first time and the topic does not yet exist in Kafka, the following exception is thrown:

java.lang.IllegalStateException: The number of expected partitions was: 100, but 3 have been found instead
                at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner$2.doWithRetry(KafkaTopicProvisioner.java:260) ~[spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner$2.doWithRetry(KafkaTopicProvisioner.java:246) ~[spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:286) ~[spring-retry-1.2.0.RELEASE.jar!/:na]
                at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:163) ~[spring-retry-1.2.0.RELEASE.jar!/:na]
                at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.getPartitionsForTopic(KafkaTopicProvisioner.java:246) ~[spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.createProducerMessageHandler(KafkaMessageChannelBinder.java:149) [spring-cloud-stream-binder-kafka-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.createProducerMessageHandler(KafkaMessageChannelBinder.java:88) [spring-cloud-stream-binder-kafka-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:112) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:57) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binder.AbstractBinder.bindProducer(AbstractBinder.java:152) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binding.BindingService.bindProducer(BindingService.java:124) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binding.BindableProxyFactory.bindOutputs(BindableProxyFactory.java:238) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binding.OutputBindingLifecycle.start(OutputBindingLifecycle.java:57) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:175) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:50) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:348) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:151) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:114) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]

application.yml

spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: kafka
          defaultBrokerPort: 9092
          zkNodes: zookeeper
          defaultZkPort: 2181
          minPartitionCount: 2
          replicationFactor: 1
          autoCreateTopics: true
          autoAddPartitions: true
          headers: type,message_id
          requiredAcks: 1
          configuration:
            "[security.protocol]": PLAINTEXT #TODO: This is a workaround. Should be security.protocol
        bindings:
          test-updater-input:
            consumer:
              autoRebalanceEnabled: true
              autoCommitOnError: true
              enableDlq: true
          test-updater-output: 
            producer:
              sync: true
              configuration:
                retries: 0
          tenant-updater-output: 
            producer:
              sync: true
              configuration:
                retries: 100
      default:
        binder: kafka
        contentType: application/json
        group: test-adapter
        consumer:
          maxAttempts: 1       
      bindings:
        test-updater-input: 
          destination: test-tenant-update
          consumer:
            concurrency: 3
            partitioned: true
        test-updater-output: 
          destination: test-tenant-update
          producer:
            partitionCount: 100
        tenant-updater-output:
          destination: tenant-entity-update
          producer:
            partitionCount: 100
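
(For context, a service like this is typically wired with the Spring Cloud Stream 1.x annotation model, roughly as in the sketch below. This is an assumption rather than code from the original post; the channel names come from the bindings section above, the interface and class names TestStreams and TestUpdater are hypothetical, and the tenant-updater-output binding is omitted for brevity.)

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.handler.annotation.SendTo;

// Hypothetical binding interface; the channel names match the
// "bindings" section above, so both channels resolve to the same
// destination, test-tenant-update.
interface TestStreams {

    String INPUT = "test-updater-input";
    String OUTPUT = "test-updater-output";

    @Input(INPUT)
    SubscribableChannel testUpdaterInput();

    @Output(OUTPUT)
    MessageChannel testUpdaterOutput();
}

@EnableBinding(TestStreams.class)
class TestUpdater {

    // Consumes from test-tenant-update and publishes the result back
    // to the same destination through the output binding.
    @StreamListener(TestStreams.INPUT)
    @SendTo(TestStreams.OUTPUT)
    public String handle(String payload) {
        return payload;
    }
}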

I tried changing the order of the producer and consumer configuration, but it did not help.

Edit: I have added the complete application.yml. The topic does not exist in Kafka when I start the service for the first time.
It feels like there is a conflict between the producer and consumer configuration. I think the reason it reports 3 partitions is that the consumer concurrency is 3, so the binder first creates the topic with 3 partitions; when it then moves on to the producer configuration, it does not adjust the partition count.
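
One way to check how many partitions the topic was actually created with is to describe it after the first start, for example with the Kafka command-line tools (a sketch; the ZooKeeper address and topic name are taken from the configuration above):

kafka-topics.sh --zookeeper zookeeper:2181 --describe --topic test-tenant-update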

The number of expected partitions was: 100, but 3 have been found instead

The topic does not have enough partitions for your configuration:

partitionCount: 100

Either fix that to 3, or alter the topic to have 100 partitions.

Or set spring.cloud.stream.kafka.binder.autoAddPartitions to true.
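
A minimal sketch of what those two options could look like in application.yml (an illustration only, reusing the binding names from the configuration above; apply one option or the other, not both):

spring:
  cloud:
    stream:
      kafka:
        binder:
          # Option 2: let the binder add partitions to the existing
          # topic until it matches the producer's partitionCount
          autoAddPartitions: true
      bindings:
        test-updater-output:
          destination: test-tenant-update
          producer:
            # Option 1: lower partitionCount to match the 3 partitions
            # the topic was created with
            partitionCount: 3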
