
Why does spring-boot with Kafka fail to start?

There is a spring-boot application with the kafka dependency; it needs to read messages from two Kafka topics:

tacocloud.orders.topic
tacocloud.tacos.topic

Messages have already been sent to both topics successfully.

The Kafka configuration for listening to these topics looks like this:

package com.example.tacocloud.config;

import com.example.tacocloud.model.jpa.Order;
import com.example.tacocloud.model.jpa.Taco;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

import java.util.HashMap;
import java.util.Map;

@Slf4j
@Configuration
@EnableKafka
@EnableConfigurationProperties
public class KafkaConfig {

    // Order

    @Bean
    @ConfigurationProperties("spring.kafka.consumer.order")
    public Map<String, Object> orderConsumerConfig() {
        return new HashMap<>();
    }

    @Bean
    public DefaultKafkaConsumerFactory<Object, Order> orderConsumerFactory(@Qualifier("orderConsumerConfig")
        Map<String, Object> orderConsumerConfig) {
        return new DefaultKafkaConsumerFactory<>(orderConsumerConfig);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Order> order1KafkaMessageListenerContainer(
        @Qualifier("orderConsumerConfig") Map<String, Object> orderConsumerConfig,
        @Qualifier("orderConsumerFactory") DefaultKafkaConsumerFactory orderConsumerFactory) {
        ConcurrentKafkaListenerContainerFactory factory = new ConcurrentKafkaListenerContainerFactory();
        factory.setConsumerFactory(orderConsumerFactory);
        return factory;
    }

    // Taco

    @Bean
    @ConfigurationProperties("spring.kafka.consumer.taco")
    public Map<String, Object> tacoConsumerConfig() {
        return new HashMap<>();
    }

    @Bean
    public DefaultKafkaConsumerFactory tacoConsumerFactory(
        @Qualifier("tacoConsumerConfig") Map<String, Object> tacoConsumerConfig) {
        return new DefaultKafkaConsumerFactory<>(tacoConsumerConfig);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory tacoConcurrentMessageListenerContainer(
        @Qualifier("tacoConsumerConfig") Map<String, Object> tacoConsumerConfig,
        @Qualifier("tacoConsumerFactory") DefaultKafkaConsumerFactory tacoConsumerFactory) {
        ConcurrentKafkaListenerContainerFactory factory = new ConcurrentKafkaListenerContainerFactory();
        factory.setConsumerFactory(tacoConsumerFactory);
        return factory;
    }
}

So, with two DefaultKafkaConsumerFactory beans and two ConcurrentKafkaListenerContainerFactory beans in place, a service was created whose @KafkaListener methods log the messages:

package com.example.tacocloud.service;

import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
@EnableKafka
public class KafkaService {

    @KafkaListener(topics = "tacocloud.orders.topic", groupId = "one")
    public void order() {
        System.out.println("Order");
    }

    @KafkaListener(topics = "tacocloud.tacos.topic", groupId = "two")
    public void taco() {
        System.out.println("Taco");
    }
}

The application.yml file:

spring:
  kafka:
    consumer:
      order:
        topic: tacocloud.orders.topic
        "[bootstrap.servers]": localhost:29888
        "[key.deserializer]": org.apache.kafka.common.serialization.StringDeserializer
        "[value.deserializer]": com.example.tacocloud.model.serialization.OrderDeserializer
        template:
          "[default.topic]": tacocloud.orders.topic
      taco:
        topic: tacocloud.tacos.topic
        "[bootstrap.servers]": localhost:29888
        "[key.deserializer]": org.apache.kafka.common.serialization.StringDeserializer
        "[value.deserializer]": com.example.tacocloud.model.serialization.TacoDeserializer
        template:
          "[default.topic]": tacocloud.tacos.topic

But when I start the spring-boot application, I see this error message:

2022-04-15 21:38:35.918 ERROR 13888 --- [restartedMain] o.s.boot.SpringApplication : Application run failed

org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.config.ConfigException: Missing required configuration "key.deserializer" which has no default value.
	at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181) ~[spring-context-5.3.16.jar:5.3.16]
	at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) ~[spring-context-5.3.16.jar:5.3.16]
	at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) ~[spring-context-5.3.16.jar:5.3.16]
	at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[na:na]
	at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) ~[spring-context-5.3.16.jar:5.3.16]
	at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) ~[spring-context-5.3.16.jar:5.3.16]
	at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:935) ~[spring-context-5.3.16.jar:5.3.16]
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586) ~[spring-context-5.3.16.jar:5.3.16]
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.6.4.jar:2.6.4]
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:740) ~[spring-boot-2.6.4.jar:2.6.4]
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:415) ~[spring-boot-2.6.4.jar:2.6.4]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:303) ~[spring-boot-2.6.4.jar:2.6.4]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1312) ~[spring-boot-2.6.4.jar:2.6.4]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1301) ~[spring-boot-2.6.4.jar:2.6.4]
	at com.example.tacocloud.TacoCloudApplication.main(TacoCloudApplication.java:10) ~[classes/:na]
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
	at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
	at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) ~[spring-boot-devtools-2.6.4.jar:2.6.4]
Caused by: org.apache.kafka.common.config.ConfigException: Missing required configuration "key.deserializer" which has no default value.
	at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:493) ~[kafka-clients-2.8.0.jar:na]
	at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:483) ~[kafka-clients-2.8.0.jar:na]
	at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108) ~[kafka-clients-2.8.0.jar:na]
	at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:129) ~[kafka-clients-2.8.0.jar:na]
	at org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:640) ~[kafka-clients-2.8.0.jar:na]
	at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:665) ~[kafka-clients-2.8.0.jar:na]
	at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createRawConsumer(DefaultKafkaConsumerFactory.java:416) ~[spring-kafka-2.8.3.jar:2.8.3]
	at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:384) ~[spring-kafka-2.8.3.jar:2.8.3]
	at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumerWithAdjustedProperties(DefaultKafkaConsumerFactory.java:360) ~[spring-kafka-2.8.3.jar:2.8.3]
	at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:327) ~[spring-kafka-2.8.3.jar:2.8.3]
	at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:304) ~[spring-kafka-2.8.3.jar:2.8.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:758) ~[spring-kafka-2.8.3.jar:2.8.3]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:344) ~[spring-kafka-2.8.3.jar:2.8.3]
	at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:442) ~[spring-kafka-2.8.3.jar:2.8.3]
	at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:209) ~[spring-kafka-2.8.3.jar:2.8.3]
	at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:442) ~[spring-kafka-2.8.3.jar:2.8.3]
	at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:331) ~[spring-kafka-2.8.3.jar:2.8.3]
	at org.springframework.kafka.config.KafkaListenerEndpointRegistry.start(KafkaListenerEndpointRegistry.java:276) ~[spring-kafka-2.8.3.jar:2.8.3]
	at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) ~[spring-context-5.3.16.jar:5.3.16]
	... 19 common frames omitted

Process finished with exit code 0

Thank you for the sample.

So, I opened it locally and put a breakpoint in this bean definition:

@Bean
public DefaultKafkaConsumerFactory<Object, Order> orderConsumerFactory(@Qualifier("orderConsumerConfig")
    Map<String, Object> orderConsumerConfig) {
    return new DefaultKafkaConsumerFactory<Object, Order>(orderConsumerConfig);
}

The orderConsumerConfig map looks like this:

orderConsumerConfig = {LinkedHashMap@8587}  size = 1
 "orderConsumerConfig" -> {HashMap@8600}  size = 5
  key = "orderConsumerConfig"
  value = {HashMap@8600}  size = 5
   "key.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer"
   "template" -> {LinkedHashMap@8611}  size = 1
   "topic" -> "tacocloud.orders.topic"
   "bootstrap.servers" -> "localhost:29888"
   "value.deserializer" -> "sample.kafka.serializer.OrderDeserializer"

So it is really no surprise that your KafkaConsumer is not initialized properly: your target map config is hidden under the orderConsumerConfig entry of this injected map.
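The nesting can be reproduced with plain JDK maps, no Spring required; NestedConfigDemo and its unwrap helper below are hypothetical names, a sketch of why the Kafka client never sees a top-level key.deserializer entry:

```java
import java.util.HashMap;
import java.util.Map;

public class NestedConfigDemo {

    // Shape of the map Spring Boot actually injected (per the debugger dump):
    // the real consumer settings sit one level down, keyed by the bean name.
    static Map<String, Object> injected() {
        Map<String, Object> inner = new HashMap<>();
        inner.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        inner.put("bootstrap.servers", "localhost:29888");
        Map<String, Object> outer = new HashMap<>();
        outer.put("orderConsumerConfig", inner);
        return outer;
    }

    // Unwrap the nested entry so the flat keys the Kafka client expects appear.
    @SuppressWarnings("unchecked")
    static Map<String, Object> unwrap(Map<String, Object> outer, String beanName) {
        return (Map<String, Object>) outer.get(beanName);
    }

    public static void main(String[] args) {
        Map<String, Object> outer = injected();
        // The outer map has no "key.deserializer" key, hence the ConfigException.
        System.out.println(outer.containsKey("key.deserializer")); // prints: false
        Map<String, Object> flat = unwrap(outer, "orderConsumerConfig");
        System.out.println(flat.get("key.deserializer"));
    }
}
```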

Would you mind sharing where you learned about @ConfigurationProperties on a Map bean, and how to use it for dependency injection?

I want to do something similar based on properties (configuring multiple ConsumerFactories). Using @ConfigurationProperties I created a Map<String, String> instead of a Map<String, Object>, then added that map's entries to a new Map<String, Object>. Not sure why Spring Boot loads a Map<String, Object> the way it does:

@Bean
@ConfigurationProperties("taco-cart.kafka")
public Map<String, String> tacoCartKafkaProperties() {
    return new HashMap<>();
}

@Bean
public ConsumerFactory<String, TacoCart> tacoCartConsumerFactory(@Qualifier("tacoCartKafkaProperties") Map<String, String> tacoCartKafkaProperties) {

    // Convert map.
    Map<String, Object> config = new HashMap<>();
    config.putAll(tacoCartKafkaProperties);

    return new DefaultKafkaConsumerFactory<>(config);
}
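The widening step above (copying a Map<String, String> into the Map<String, Object> that DefaultKafkaConsumerFactory expects) can be exercised on its own with the JDK; MapConversionDemo is a hypothetical name used only for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class MapConversionDemo {

    // Widen a Map<String, String> (as bound by @ConfigurationProperties)
    // into a Map<String, Object>; putAll accepts the narrower value type.
    static Map<String, Object> widen(Map<String, String> props) {
        Map<String, Object> config = new HashMap<>();
        config.putAll(props);
        return config;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:29888");
        Map<String, Object> config = widen(props);
        System.out.println(config.get("bootstrap.servers")); // prints: localhost:29888
    }
}
```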
