What is the simplest Spring Kafka @KafkaListener configuration to consume all records from a set of compacted topics?
I have the names of several compacted Kafka topics (topic1, topic2, ..., topicN) defined in my Spring application.yaml file. I want to consume all of the records on every partition of each topic at startup. The number of partitions on each topic is not known in advance.
The official Spring Kafka 2.6.1 documentation suggests that the simplest way to do this is to implement a PartitionFinder and use it in a SpEL expression to dynamically look up the number of partitions for a topic, and then to use the * wildcard in the partitionOffsets attribute of the @TopicPartition annotation (see Explicit Partition Assignment in the @KafkaListener annotation documentation):
@KafkaListener(topicPartitions = @TopicPartition(topic = "compacted",
        partitions = "#{@finder.partitions('compacted')}",
        partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")))
public void listen(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key, String payload) {
    // process record
}
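The finder bean referenced in that SpEL expression is not shown here; the same documentation section sketches it roughly as follows (a sketch, assuming a ConsumerFactory<String, String> bean is available to inject):

```java
// Registered as a bean named "finder" so the SpEL expression
// "#{@finder.partitions(...)}" can resolve it by name.
@Bean
public PartitionFinder finder(ConsumerFactory<String, String> consumerFactory) {
    return new PartitionFinder(consumerFactory);
}

public static class PartitionFinder {

    private final ConsumerFactory<String, String> consumerFactory;

    public PartitionFinder(ConsumerFactory<String, String> consumerFactory) {
        this.consumerFactory = consumerFactory;
    }

    public String[] partitions(String topic) {
        // Ask the broker for the topic's partitions and return the partition
        // numbers as strings for the annotation's partitions attribute.
        try (Consumer<String, String> consumer = consumerFactory.createConsumer()) {
            return consumer.partitionsFor(topic).stream()
                    .map(pi -> "" + pi.partition())
                    .toArray(String[]::new);
        }
    }
}
```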
Since I have several topics, the resulting code is very verbose:
@KafkaListener(topicPartitions = {
    @TopicPartition(
        topic = "${topic1}",
        partitions = "#{@finder.partitions('${topic1}')}",
        partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")
    ),
    @TopicPartition(
        topic = "${topic2}",
        partitions = "#{@finder.partitions('${topic2}')}",
        partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")
    ),
    // and many more @TopicPartitions...
    @TopicPartition(
        topic = "${topicN}",
        partitions = "#{@finder.partitions('${topicN}')}",
        partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")
    )
})
public void listen(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key, String payload) {
    // process record
}
How can I make this repetitive configuration more concise by configuring the topicPartitions attribute of the @KafkaListener annotation with a dynamically generated array of @TopicPartition entries, one for each of my N topics?
This is not currently possible with @KafkaListener; please open a new feature request on GitHub.
The only workaround I can think of is to programmatically create a listener container from the container factory, together with a listener adapter. I can provide an example if you need it.
EDIT
Here is an example:
@SpringBootApplication
public class So64022266Application {

    public static void main(String[] args) {
        SpringApplication.run(So64022266Application.class, args);
    }

    @Bean
    public NewTopic topic1() {
        return TopicBuilder.name("so64022266-1").partitions(10).replicas(1).build();
    }

    @Bean
    public NewTopic topic2() {
        return TopicBuilder.name("so64022266-2").partitions(10).replicas(1).build();
    }

    @Bean
    ConcurrentMessageListenerContainer<String, String> container(@Value("${topics}") String[] topics,
            PartitionFinder finder,
            ConcurrentKafkaListenerContainerFactory<String, String> factory,
            MyListener listener) throws Exception {

        MethodKafkaListenerEndpoint<String, String> endpoint = endpoint(topics, finder, listener);
        ConcurrentMessageListenerContainer<String, String> container = factory.createListenerContainer(endpoint);
        container.getContainerProperties().setGroupId("someGroup");
        return container;
    }

    // Not a @Bean: the String[] topics parameter cannot be autowired;
    // this is a plain helper method called from the container bean above.
    private MethodKafkaListenerEndpoint<String, String> endpoint(String[] topics, PartitionFinder finder,
            MyListener listener) throws NoSuchMethodException {

        MethodKafkaListenerEndpoint<String, String> endpoint = new MethodKafkaListenerEndpoint<>();
        endpoint.setBean(listener);
        endpoint.setMethod(MyListener.class.getDeclaredMethod("listen", String.class, String.class));
        endpoint.setTopicPartitions(Arrays.stream(topics)
                .flatMap(topic -> finder.partitions(topic))
                .toArray(TopicPartitionOffset[]::new));
        endpoint.setMessageHandlerMethodFactory(methodFactory());
        return endpoint;
    }

    @Bean
    DefaultMessageHandlerMethodFactory methodFactory() {
        return new DefaultMessageHandlerMethodFactory();
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template,
            ConcurrentMessageListenerContainer<String, String> container) {

        return args -> {
            System.out.println(container.getAssignedPartitions());
            template.send("so64022266-1", "key1", "foo");
            template.send("so64022266-2", "key2", "bar");
        };
    }
}
@Component
class MyListener {

    public void listen(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key, String payload) {
        System.out.println(key + ":" + payload);
    }
}
@Component
class PartitionFinder {

    private final ConsumerFactory<String, String> consumerFactory;

    public PartitionFinder(ConsumerFactory<String, String> consumerFactory) {
        this.consumerFactory = consumerFactory;
    }

    public Stream<TopicPartitionOffset> partitions(String topic) {
        System.out.println("+" + topic + "+");
        try (Consumer<String, String> consumer = consumerFactory.createConsumer()) {
            return consumer.partitionsFor(topic).stream()
                    .map(part -> new TopicPartitionOffset(topic, part.partition(), 0L));
        }
    }
}
topics=so64022266-1, so64022266-2
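The core of the workaround is the flatMap step that pairs every topic with each of its partitions at initial offset 0. Stripped of the Spring types, it can be sketched in plain Java; here Tpo is a hypothetical stand-in for TopicPartitionOffset, and the partition counts are supplied directly instead of being queried from the broker:

```java
import java.util.Map;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class FlattenDemo {

    // Hypothetical stand-in for Spring Kafka's TopicPartitionOffset.
    public record Tpo(String topic, int partition, long offset) {}

    // Flatten N topics into one array of (topic, partition, offset 0) entries,
    // the same shape that endpoint.setTopicPartitions(...) receives above.
    public static Tpo[] flatten(String[] topics, Map<String, Integer> partitionCounts) {
        return Stream.of(topics)
                .flatMap(t -> IntStream.range(0, partitionCounts.get(t))
                        .mapToObj(p -> new Tpo(t, p, 0L)))
                .toArray(Tpo[]::new);
    }

    public static void main(String[] args) {
        for (Tpo tpo : flatten(new String[] { "so64022266-1", "so64022266-2" },
                Map.of("so64022266-1", 2, "so64022266-2", 2))) {
            System.out.println(tpo);
        }
    }
}
```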
If you need to deal with tombstone records (null values), the handler factory would have to be enhanced; we do not currently expose the framework's handler factory.