Multiple Kafka consumers not receiving messages
I am using an embedded Kafka broker with the Kafka Streams binder.
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

import javax.annotation.PreDestroy;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.kafka.test.EmbeddedKafkaBroker;

import lombok.extern.slf4j.Slf4j;

@Configuration
@Profile({"dev", "test"})
@Slf4j
public class EmbeddedKafkaBrokerConfig {

  private static final String TMP_EMBEDDED_KAFKA_LOGS =
      String.format("/tmp/embedded-kafka-logs-%1$s/", UUID.randomUUID());
  private static final String PORT = "port";
  private static final String LOG_DIRS = "log.dirs";
  private static final String LISTENERS = "listeners";
  private static final Integer KAFKA_PORT = 9092;
  private static final String LISTENERS_VALUE = "PLAINTEXT://localhost:" + KAFKA_PORT;
  private static final Integer ZOOKEEPER_PORT = 2181;

  private EmbeddedKafkaBroker embeddedKafkaBroker;

  /**
   * Bean for the embeddedKafkaBroker.
   *
   * @return local embeddedKafkaBroker
   */
  @Bean
  @Qualifier("embeddedKafkaBroker")
  public EmbeddedKafkaBroker embeddedKafkaBroker() {
    Map<String, String> brokerProperties = new HashMap<>();
    brokerProperties.put(LISTENERS, LISTENERS_VALUE);
    brokerProperties.put(PORT, KAFKA_PORT.toString());
    brokerProperties.put(LOG_DIRS, TMP_EMBEDDED_KAFKA_LOGS);
    this.embeddedKafkaBroker =
        new EmbeddedKafkaBroker(1, true, 2)
            .kafkaPorts(KAFKA_PORT)
            .zkPort(ZOOKEEPER_PORT)
            .brokerProperties(brokerProperties);
    return embeddedKafkaBroker;
  }

  /** Close the embeddedKafkaBroker on destroy. */
  @PreDestroy
  public void preDestroy() {
    if (embeddedKafkaBroker != null) {
      log.warn("[EmbeddedKafkaBrokerConfig] destroying kafka broker {}", embeddedKafkaBroker);
      embeddedKafkaBroker.destroy();
    }
  }
}
A REST controller triggers publishing data to the topic:
@RestController
@RequestMapping("/v1/demo/")
public class DemoController {

  @Autowired
  DemoSupplier demoSupplier;

  @GetMapping("hello")
  public String helloController() {
    demoSupplier.supply();
    return "Hello World!";
  }
}
DemoSupplier.class
@Component
public class DemoSupplier {

  @Autowired
  @Qualifier("embeddedKafkaBroker")
  public EmbeddedKafkaBroker kafkaBroker;

  @Autowired
  private KafkaTemplate<String, String> kafkaTemplate;

  @Value("${demo.topic}")
  private String topicName;

  @Bean
  public KafkaTemplate<String, String> stringKafkaTemplate() {
    Map<String, Object> producerConfigs = new HashMap<>();
    producerConfigs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    producerConfigs.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    producerConfigs.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerConfigs));
  }

  public void supply() {
    for (int i = 0; i < 100; i++) {
      kafkaTemplate.send(topicName, "Message:" + i * 2);
    }
  }
}
The consumers:
@Component
public class DemoConsumer {

  @Bean
  @Qualifier("demoConsumerProcessor")
  public Consumer<KStream<String, String>> demoConsumerProcessor() {
    return input -> input.foreach((key, value) -> System.out.println(value));
  }

  @Bean
  @Qualifier("demoConsumerProcessor2")
  public Consumer<KStream<String, String>> demoConsumerProcessor2() {
    return input -> input.foreach((key, value) -> System.out.println("This is second consumer 2: " + value));
  }
}
application.properties:
# ===============================
# = Profiles
# ===============================
spring.profiles.active=dev
server.port=8181
# ===============================
# = Kafka Topics
# ===============================
demo.topic=demoTopic
object.demo.topic=objectDemoTopic
# ===============================
# = SPRING CLOUD STREAM
# ===============================
spring.cloud.stream.bindings.demoConsumerProcessor-in-0.destination=demoTopic
spring.cloud.stream.bindings.demoConsumerProcessor2-in-0.destination=demoTopic
spring.cloud.stream.function.definition=demoConsumerProcessor,demoConsumerProcessor2
spring.cloud.stream.kafka.streams.binder.functions.demoConsumerProcessor.applicationId=group_id
spring.cloud.stream.kafka.streams.binder.functions.demoConsumerProcessor2.applicationId=group_id
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
Note: in this property (spring.cloud.stream.function.definition), whichever bean is listed first consumes the messages published to the topic, but only that one receives them. As far as I know, with the applicationId set this way both consumers have the same group ID, and the logs show the same.
Now here is my deduction:
The number of partitions created by Embedded Kafka is always 1. I tried changing it to 2 when creating the bean (see its constructor: (count: 1, controlledShutdown: true, partitions: 2)), but something still seems to be off.
Important logs:
[Consumer clientId=group_id-359878ed-1b41-4cf0-b9b8-6e21e5e1f0fe-StreamThread-1-consumer, groupId=group_id] Updating assignment with
Assigned partitions: [demoTopic-0]
Current owned partitions: []
Added partitions (assigned - owned): [demoTopic-0]
Revoked partitions (owned - assigned): []
Consumer clientId=group_id-4dce1ba5-7d97-4c18-92c3-cb79dab271b5-StreamThread-1-consumer, groupId=group_id] Updating assignment with
Assigned partitions: []
Current owned partitions: []
Added partitions (assigned - owned): []
Revoked partitions (owned - assigned): []
So according to the logs, only one partition may have been created for the topic.
I am somewhat confused about "Updating assignment": do I have to set more properties to make multiple consumers work, or is this an EmbeddedKafka problem? Please also look at it from other angles; I don't want this to become an XY problem. The full logs are too large to post; I will share them if needed.
If there is only one partition, then only one consumer in the same group can be assigned that partition.
Ideally, you create the topic yourself instead of letting the broker auto-create it. In fact, it is recommended to disable that broker setting (auto.create.topics.enable).
Your other options are to use different consumer groups, or to keep one group and call the two processing steps from a single consumer:
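The one-partition behaviour can be sketched in plain Java. This is a simplified model of Kafka's range-style partition assignment, not the real client code; with fewer partitions than group members, the surplus members sit idle:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RangeAssignmentDemo {

    // Simplified sketch of range assignment: the topic's partitions are split
    // across the sorted members of one consumer group. With 1 partition and
    // 2 members, the second member gets an empty assignment.
    static Map<String, List<Integer>> assign(List<String> consumers, int partitions) {
        Map<String, List<Integer>> out = new LinkedHashMap<>();
        for (String c : consumers) out.put(c, new ArrayList<>());
        List<String> sorted = new ArrayList<>(consumers);
        Collections.sort(sorted);
        int perConsumer = partitions / sorted.size();
        int extra = partitions % sorted.size();
        int p = 0;
        for (int i = 0; i < sorted.size(); i++) {
            int count = perConsumer + (i < extra ? 1 : 0);
            for (int j = 0; j < count; j++) out.get(sorted.get(i)).add(p++);
        }
        return out;
    }

    public static void main(String[] args) {
        // One partition, two consumers in the same group: second one is idle,
        // matching the "Assigned partitions: []" line in the logs above.
        System.out.println(assign(List.of("consumer-1", "consumer-2"), 1));
        // Two partitions: each consumer owns one.
        System.out.println(assign(List.of("consumer-1", "consumer-2"), 2));
    }
}
```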
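For the "different groups" option, that would mean giving each function its own applicationId instead of sharing one (property names taken from the question's config; the group names here are made up):

```properties
spring.cloud.stream.kafka.streams.binder.functions.demoConsumerProcessor.applicationId=group_id_1
spring.cloud.stream.kafka.streams.binder.functions.demoConsumerProcessor2.applicationId=group_id_2
```

With distinct application IDs the two Kafka Streams applications form separate consumer groups, so each one receives every record even from a single partition.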
@Bean
@Qualifier("demoConsumerProcessor")
public Consumer<KStream<String, String>> demoConsumerProcessor() {
  return input -> input.foreach((key, value) -> {
    System.out.println(value);
    System.out.println("second consumer " + value);
  });
}
The solution I found is not ideal, but it gets the job done:
I asked the broker to create the topic when building its bean:
this.embeddedKafkaBroker =
    new EmbeddedKafkaBroker(1, true, 2, "demoTopic")
        .kafkaPorts(KAFKA_PORT)
        .zkPort(ZOOKEEPER_PORT)
        .brokerProperties(brokerProperties);
return embeddedKafkaBroker;
Now both consumers receive the records.
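An alternative to baking the topic name into the broker bean is declaring a NewTopic bean, so the topic is created with the desired partition count at application startup. This is a sketch assuming Spring Kafka's TopicBuilder and an auto-configured KafkaAdmin are available:

```java
@Bean
public NewTopic demoTopic() {
  // Created (or validated) by KafkaAdmin at startup; 2 partitions so that
  // both members of one consumer group can be assigned work.
  return TopicBuilder.name("demoTopic").partitions(2).replicas(1).build();
}
```

This keeps the broker configuration generic and moves the topic definition next to the application that owns it.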