What is the simplest Spring Kafka @KafkaListener configuration to consume all records from a set of compacted topics?

I have the names of several compacted Kafka topics (topic1, topic2, ..., topicN) defined in my Spring application.yaml file. I want to be able to consume all of the records on each topic partition on startup. The number of partitions on each topic is not known in advance.
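For example, the relevant entries in application.yaml might look like this (the topic names here are hypothetical):

topic1: customer-events-compacted
topic2: order-events-compacted
topicN: other-events-compacted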

The official Spring Kafka 2.6.1 documentation suggests that the simplest way to do this is to implement a PartitionFinder, use it in a SpEL expression to dynamically look up the number of partitions for a topic, and then use a * wildcard in the partitions attribute of a @TopicPartition annotation (see Explicit Partition Assignment in the @KafkaListener Annotation documentation):

@KafkaListener(topicPartitions = @TopicPartition(topic = "compacted",
            partitions = "#{@finder.partitions('compacted')}",
            partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")))
public void listen(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key, String payload) {
    // process record
}
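For context, the documentation pairs this with a PartitionFinder bean (registered under the name finder so the @finder SpEL reference resolves); a minimal sketch along the lines of the reference documentation:

public class PartitionFinder {

    private final ConsumerFactory<String, String> consumerFactory;

    public PartitionFinder(ConsumerFactory<String, String> consumerFactory) {
        this.consumerFactory = consumerFactory;
    }

    // Look up the topic's partitions with a short-lived consumer and return the
    // partition numbers as strings for the partitions attribute of @TopicPartition
    public String[] partitions(String topic) {
        try (Consumer<String, String> consumer = consumerFactory.createConsumer()) {
            return consumer.partitionsFor(topic).stream()
                    .map(part -> "" + part.partition())
                    .toArray(String[]::new);
        }
    }

}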

Since I have several topics, the resulting code is very verbose:

@KafkaListener(topicPartitions = {
        @TopicPartition(
                topic = "${topic1}",
                partitions = "#{@finder.partitions('${topic1}')}",
                partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")
        ),
        @TopicPartition(
                topic = "${topic2}",
                partitions = "#{@finder.partitions('${topic2}')}",
                partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")
        ),
        // and many more @TopicPartitions...
        @TopicPartition(
                topic = "${topicN}",
                partitions = "#{@finder.partitions('${topicN}')}",
                partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")
        )
})
public void listen(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key, String payload) {
    // process record
}

How can I make this repetitive configuration more concise by configuring the topicPartitions attribute of the @KafkaListener annotation with a dynamically generated array of @TopicPartition annotations (one for each of my N topics)?

It's not currently possible with @KafkaListener - please open a new feature issue on GitHub.

The only workaround I can think of is to programmatically create a listener container from the container factory and create a listener adapter. I can provide an example if you need it.

EDIT

Here is an example:

@SpringBootApplication
public class So64022266Application {

    public static void main(String[] args) {
        SpringApplication.run(So64022266Application.class, args);
    }

    @Bean
    public NewTopic topic1() {
        return TopicBuilder.name("so64022266-1").partitions(10).replicas(1).build();
    }

    @Bean
    public NewTopic topic2() {
        return TopicBuilder.name("so64022266-2").partitions(10).replicas(1).build();
    }

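    // Manually create and configure a listener container covering every partition of
    // the configured topics, doing programmatically what @KafkaListener does statically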
    @Bean
    ConcurrentMessageListenerContainer<String, String> container(@Value("${topics}") String[] topics,
            PartitionFinder finder,
            ConcurrentKafkaListenerContainerFactory<String, String> factory,
            MyListener listener) throws Exception {

        MethodKafkaListenerEndpoint<String, String> endpoint = endpoint(topics, finder, listener);
        ConcurrentMessageListenerContainer<String, String> container = factory.createListenerContainer(endpoint);
        container.getContainerProperties().setGroupId("someGroup");
        return container;
    }

    private MethodKafkaListenerEndpoint<String, String> endpoint(String[] topics, PartitionFinder finder,
            MyListener listener) throws NoSuchMethodException {

        MethodKafkaListenerEndpoint<String, String> endpoint = new MethodKafkaListenerEndpoint<>();
        endpoint.setBean(listener);
        endpoint.setMethod(MyListener.class.getDeclaredMethod("listen", String.class, String.class));
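        // One TopicPartitionOffset per partition of every configured topic, each
        // starting at offset 0 so the full compacted topic is replayed on startup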
        endpoint.setTopicPartitions(Arrays.stream(topics)
            .flatMap(topic -> finder.partitions(topic))
            .toArray(TopicPartitionOffset[]::new));
        endpoint.setMessageHandlerMethodFactory(methodFactory());
        return endpoint;
    }

    @Bean
    DefaultMessageHandlerMethodFactory methodFactory() {
        return new DefaultMessageHandlerMethodFactory();
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template,
            ConcurrentMessageListenerContainer<String, String> container) {

        return args -> {
            System.out.println(container.getAssignedPartitions());
            template.send("so64022266-1", "key1", "foo");
            template.send("so64022266-2", "key2", "bar");
        };
    }

}

@Component
class MyListener {

    public void listen(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key, String payload) {
        System.out.println(key + ":" + payload);
    }

}

@Component
class PartitionFinder {

    private final ConsumerFactory<String, String> consumerFactory;

    public PartitionFinder(ConsumerFactory<String, String> consumerFactory) {
        this.consumerFactory = consumerFactory;
    }

    public Stream<TopicPartitionOffset> partitions(String topic) {
        System.out.println("+" + topic + "+");
        try (Consumer<String, String> consumer = consumerFactory.createConsumer()) {
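            // partitionsFor() returns a fully materialized list, so the resulting
            // stream is still safe to consume after the consumer has been closed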
            return consumer.partitionsFor(topic).stream()
                    .map(part -> new TopicPartitionOffset(topic, part.partition(), 0L));
        }
    }

}
where the topics property is a comma-delimited list, e.g. in application.properties:

topics=so64022266-1, so64022266-2

If you need to deal with tombstone records (null values), we need to enhance the handler factory; we currently don't expose the framework's handler factory.
