What is the simplest Spring Kafka @KafkaListener configuration to consume all records from a set of compacted topics?
I have the names of several compacted Kafka topics (topic1, topic2, ..., topicN) defined in my Spring application.yaml file. I want to be able to consume all of the records on each topic partition on startup. The number of partitions on each topic is not known in advance.
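For context, the topic names are assumed to be defined along these lines in application.yaml (the property keys and topic names here are hypothetical; they only need to match the ${topic1} ... ${topicN} placeholders used below):

# hypothetical application.yaml entries
topic1: my-compacted-topic-1
topic2: my-compacted-topic-2
topicN: my-compacted-topic-n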
The official Spring Kafka 2.6.1 documentation suggests that the simplest way to do this is to implement a PartitionFinder and use it in a SpEL expression to dynamically look up the number of partitions for a topic, and to then use a * wildcard in the partition attribute of the @PartitionOffset annotation so the initial offset applies to all of those partitions (see Explicit Partition Assignment in the @KafkaListener Annotation documentation):
@KafkaListener(topicPartitions = @TopicPartition(topic = "compacted",
            partitions = "#{@finder.partitions('compacted')}",
            partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")))
public void listen(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key, String payload) {
    // process record
}
Since I have several topics, the resulting code is very verbose:
@KafkaListener(topicPartitions = {
        @TopicPartition(
                topic = "${topic1}",
                partitions = "#{@finder.partitions('${topic1}')}",
                partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")
        ),
        @TopicPartition(
                topic = "${topic2}",
                partitions = "#{@finder.partitions('${topic2}')}",
                partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")
        ),
        // and many more @TopicPartitions...
        @TopicPartition(
                topic = "${topicN}",
                partitions = "#{@finder.partitions('${topicN}')}",
                partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")
        )
})
public void listen(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key, String payload) {
    // process record
}
How can I make this repetitive configuration more concise by configuring the topicPartitions attribute of the @KafkaListener annotation with a dynamically generated array of @TopicPartition annotations (one for each of my N topics)?
It's not currently possible with @KafkaListener - please open a new feature issue on GitHub.

The only workaround I can think of is to programmatically create a listener container from the container factory and create a listener adapter. I can provide an example if you need it.
EDIT

Here is an example:
@SpringBootApplication
public class So64022266Application {

    public static void main(String[] args) {
        SpringApplication.run(So64022266Application.class, args);
    }

    @Bean
    public NewTopic topic1() {
        return TopicBuilder.name("so64022266-1").partitions(10).replicas(1).build();
    }

    @Bean
    public NewTopic topic2() {
        return TopicBuilder.name("so64022266-2").partitions(10).replicas(1).build();
    }

    @Bean
    ConcurrentMessageListenerContainer<String, String> container(@Value("${topics}") String[] topics,
            PartitionFinder finder,
            ConcurrentKafkaListenerContainerFactory<String, String> factory,
            MyListener listener) throws Exception {

        MethodKafkaListenerEndpoint<String, String> endpoint = endpoint(topics, finder, listener);
        ConcurrentMessageListenerContainer<String, String> container = factory.createListenerContainer(endpoint);
        container.getContainerProperties().setGroupId("someGroup");
        return container;
    }

    @Bean
    MethodKafkaListenerEndpoint<String, String> endpoint(String[] topics, PartitionFinder finder,
            MyListener listener) throws NoSuchMethodException {

        MethodKafkaListenerEndpoint<String, String> endpoint = new MethodKafkaListenerEndpoint<>();
        endpoint.setBean(listener);
        endpoint.setMethod(MyListener.class.getDeclaredMethod("listen", String.class, String.class));
        // One TopicPartitionOffset per partition of each configured topic, all starting at offset 0.
        endpoint.setTopicPartitions(Arrays.stream(topics)
                .flatMap(topic -> finder.partitions(topic))
                .toArray(TopicPartitionOffset[]::new));
        endpoint.setMessageHandlerMethodFactory(methodFactory());
        return endpoint;
    }

    @Bean
    DefaultMessageHandlerMethodFactory methodFactory() {
        return new DefaultMessageHandlerMethodFactory();
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template,
            ConcurrentMessageListenerContainer<String, String> container) {

        return args -> {
            System.out.println(container.getAssignedPartitions());
            template.send("so64022266-1", "key1", "foo");
            template.send("so64022266-2", "key2", "bar");
        };
    }

}

@Component
class MyListener {

    public void listen(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key, String payload) {
        System.out.println(key + ":" + payload);
    }

}

@Component
class PartitionFinder {

    private final ConsumerFactory<String, String> consumerFactory;

    public PartitionFinder(ConsumerFactory<String, String> consumerFactory) {
        this.consumerFactory = consumerFactory;
    }

    public Stream<TopicPartitionOffset> partitions(String topic) {
        System.out.println("+" + topic + "+");
        // partitionsFor() returns a fully materialized list, so streaming it after the
        // short-lived consumer is closed is safe.
        try (Consumer<String, String> consumer = consumerFactory.createConsumer()) {
            return consumer.partitionsFor(topic).stream()
                    .map(part -> new TopicPartitionOffset(topic, part.partition(), 0L));
        }
    }

}
with the topic names supplied via the topics property (e.g. in application.properties):

topics=so64022266-1, so64022266-2
If you need to deal with tombstone records (null values) we need to enhance the handler factory; we currently don't expose the framework's handler factory.
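As a hedged sketch only (not part of the original answer): one way to sidestep the null-payload conversion is to have the listener take the raw ConsumerRecord, which the listener adapter passes to the method directly rather than through the payload argument resolver, so a tombstone simply shows up as record.value() == null. The getDeclaredMethod call in the endpoint bean would have to be changed to match the new signature, e.g. MyListener.class.getDeclaredMethod("listen", ConsumerRecord.class).

// Hypothetical alternative listener, not from the original answer: tombstones arrive
// as records whose value() is null instead of failing String payload conversion.
public void listen(ConsumerRecord<String, String> record) {
    if (record.value() == null) {
        System.out.println("tombstone for key " + record.key());
    }
    else {
        System.out.println(record.key() + ":" + record.value());
    }
}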