
Spring Cloud Stream producing and consuming the same topic

I have a service that uses Spring Boot and Spring Cloud Stream. The service produces to a specific topic and also consumes from that same topic. When I start the service for the first time and the topic does not yet exist in Kafka, the following exception is thrown:

java.lang.IllegalStateException: The number of expected partitions was: 100, but 3 have been found instead
                at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner$2.doWithRetry(KafkaTopicProvisioner.java:260) ~[spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner$2.doWithRetry(KafkaTopicProvisioner.java:246) ~[spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:286) ~[spring-retry-1.2.0.RELEASE.jar!/:na]
                at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:163) ~[spring-retry-1.2.0.RELEASE.jar!/:na]
                at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.getPartitionsForTopic(KafkaTopicProvisioner.java:246) ~[spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.createProducerMessageHandler(KafkaMessageChannelBinder.java:149) [spring-cloud-stream-binder-kafka-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.createProducerMessageHandler(KafkaMessageChannelBinder.java:88) [spring-cloud-stream-binder-kafka-1.2.1.RELEASE.jar!/:1.2.1.RELEASE]
                at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:112) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:57) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binder.AbstractBinder.bindProducer(AbstractBinder.java:152) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binding.BindingService.bindProducer(BindingService.java:124) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binding.BindableProxyFactory.bindOutputs(BindableProxyFactory.java:238) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.cloud.stream.binding.OutputBindingLifecycle.start(OutputBindingLifecycle.java:57) [spring-cloud-stream-1.2.2.RELEASE.jar!/:1.2.2.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:175) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:50) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:348) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:151) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]
                at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:114) [spring-context-4.3.7.RELEASE.jar!/:4.3.7.RELEASE]

application.yml

spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: kafka
          defaultBrokerPort: 9092
          zkNodes: zookeeper
          defaultZkPort: 2181
          minPartitionCount: 2
          replicationFactor: 1
          autoCreateTopics: true
          autoAddPartitions: true
          headers: type,message_id
          requiredAcks: 1
          configuration:
            "[security.protocol]": PLAINTEXT #TODO: This is a workaround. Should be security.protocol
        bindings:
          test-updater-input:
            consumer:
              autoRebalanceEnabled: true
              autoCommitOnError: true
              enableDlq: true
          test-updater-output: 
            producer:
              sync: true
              configuration:
                retries: 0
          tenant-updater-output: 
            producer:
              sync: true
              configuration:
                retries: 100
      default:
        binder: kafka
        contentType: application/json
        group: test-adapter
        consumer:
          maxAttempts: 1       
      bindings:
        test-updater-input: 
          destination: test-tenant-update
          consumer:
            concurrency: 3
            partitioned: true
        test-updater-output: 
          destination: test-tenant-update
          producer:
            partitionCount: 100
        tenant-updater-output:
          destination: tenant-entity-update
          producer:
            partitionCount: 100

I tried changing the order of the producer and consumer configuration, but it didn't help.

EDIT: I have added the full application.yml. The topic does not exist in Kafka when I start the service for the first time.
It feels like there is a conflict between the producer and consumer configuration. I think the reason it reports 3 partitions is that the consumer concurrency is 3, so the topic is first created with 3 partitions, and when the binder then moves on to the producer configuration it does not adjust the partition count.
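
To make the suspected conflict easier to see, the relevant part of the bindings boils down to the excerpt below (this is my reading of it, not confirmed binder behaviour): the topic seems to get created with 3 partitions for the consumer side, and the producer side then finds only 3 where it expects 100.

spring:
  cloud:
    stream:
      bindings:
        test-updater-input:
          destination: test-tenant-update    # consumer side of the topic
          consumer:
            concurrency: 3                   # 3 partitions are enough for this
            partitioned: true
        test-updater-output:
          destination: test-tenant-update    # producer side of the same topic
          producer:
            partitionCount: 100              # but the producer expects 100 partitions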

The number of expected partitions was: 100, but 3 have been found instead

The topic does not have enough partitions for your configuration:

partitionCount: 100

Either fix that configuration to 3, or change the topic to have 100 partitions.

Or set spring.cloud.stream.kafka.binder.autoAddPartitions to true.
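
A minimal sketch of that setting in the binder section (only the key relevant to this answer is shown):

spring:
  cloud:
    stream:
      kafka:
        binder:
          autoAddPartitions: true   # lets the binder add partitions to an existing topic when it has fewer than required

With this set, the binder should be able to grow the topic up to the producer's partitionCount instead of failing at startup.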
