
How can I consume from multiple Kafka topics that are associated with different brokers?


I have a Spring Boot application that needs to consume from 2 topics, but the topics are associated with different brokers.

I am using Spring Kafka with the @KafkaListener annotation, and I have found ways to consume from 2 topics associated with the same broker, but not from topics on different brokers. Unfortunately, I don't see anything helpful in the Spring Boot or Spring Kafka docs on how to do this.

There are a few ways to do this, and unfortunately neither the Spring Boot nor the Spring Kafka documentation makes the best practice clear. There are also lots of answers across SO that address consuming from multiple topics on the same broker, but it isn't always as simple as that.

Method 1

The easiest way to solve this is to add a properties parameter to your @KafkaListener annotations:

// Each listener overrides bootstrap.servers, so each one connects to its own broker
@KafkaListener(topics = ["\${topic-1-name}"], properties = ["bootstrap.servers=\${bootstrap-server-1}"])
fun topic1Listener(@Payload messages: List<String>, ack: Acknowledgment) {
    // Do work, then commit the offsets manually
    ack.acknowledge()
}

@KafkaListener(topics = ["\${topic-2-name}"], properties = ["bootstrap.servers=\${bootstrap-server-2}"])
fun topic2Listener(@Payload messages: List<String>, ack: Acknowledgment) {
    // Do work, then commit the offsets manually
    ack.acknowledge()
}

Any key/value pair we specify in the properties parameter overrides the corresponding default key/value in the DefaultKafkaConsumerFactory. In this case, we override the bootstrap.servers property with a specific broker address for each topic.

However, we can still use the Spring Boot features that are "nice to have", such as automatic topic creation and letting Spring Boot set up a group-id for our application. We just need to leave the group-id property in our application.properties or application.yml file.

spring:
  kafka:
    consumer:
      group-id: group-id-of-your-choice
  • Note that we can use the same group-id for both of our consumers, even though they span multiple brokers. It is actually good practice to have one group-id for your entire application; that way, monitoring consumer lag, among other metrics, stays simple.
  • Also note that we no longer store our topic names in the Spring configuration section; we keep them elsewhere (see the sketch below), because we do not want Spring Boot configuring our topics with the wrong broker addresses. The listeners handle that part when they override the properties, as shown above.
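For reference, here is a minimal application.yml sketch showing where the custom placeholders used above could live. The property names (topic-1-name, bootstrap-server-1, and so on) are just assumptions matching the \${...} placeholders in the listeners, and since the listeners take a batch payload and an Acknowledgment, the standard spring.kafka.listener properties are used to enable batch mode and manual acks:

spring:
  kafka:
    consumer:
      group-id: group-id-of-your-choice
    listener:
      type: batch       # matches the List<String> payload in the listeners
      ack-mode: manual  # matches the Acknowledgment parameter

# Custom (non-Spring) properties referenced by the listener placeholders;
# these names are assumptions matching the examples above
topic-1-name: first-topic
topic-2-name: second-topic
bootstrap-server-1: broker-1-host:9092
bootstrap-server-2: broker-2-host:9092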

There are lots of other ways to accomplish this; however, this is the simplest way I have found and tested.

Method 2

Another approach is to create your own ConsumerFactory and KafkaListenerContainerFactory beans, configuring each factory with the bootstrap servers of your choice. The first method is much cleaner and simpler because it reuses the default container factory, but here is how to create custom factories with your own properties.

import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.StringDeserializer
import org.springframework.context.annotation.Bean
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory
import org.springframework.kafka.core.DefaultKafkaConsumerFactory
import org.springframework.kafka.listener.ContainerProperties

// bootStrapServers1, bootStrapServers2, and groupId are assumed to be
// @Value-injected fields on the enclosing @Configuration class

@Bean
fun consumerFactory1(): DefaultKafkaConsumerFactory<String, String> {
    val props = mutableMapOf<String, Any>()
    props[ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG] = bootStrapServers1!!
    props[ConsumerConfig.GROUP_ID_CONFIG] = groupId!!
    props[ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG] = StringDeserializer::class.java
    props[ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG] = StringDeserializer::class.java
    return DefaultKafkaConsumerFactory(props)
}

@Bean
fun containerFactory1(): ConcurrentKafkaListenerContainerFactory<String, String> {
    val factory = ConcurrentKafkaListenerContainerFactory<String, String>()
    factory.consumerFactory = consumerFactory1()
    factory.containerProperties.ackMode = ContainerProperties.AckMode.MANUAL
    factory.isBatchListener = true
    return factory
}

@Bean
fun consumerFactory2(): DefaultKafkaConsumerFactory<String, String> {
    val props = mutableMapOf<String, Any>()
    // The only real difference: this factory points at the second broker
    props[ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG] = bootStrapServers2!!
    props[ConsumerConfig.GROUP_ID_CONFIG] = groupId!!
    props[ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG] = StringDeserializer::class.java
    props[ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG] = StringDeserializer::class.java
    return DefaultKafkaConsumerFactory(props)
}

@Bean
fun containerFactory2(): ConcurrentKafkaListenerContainerFactory<String, String> {
    val factory = ConcurrentKafkaListenerContainerFactory<String, String>()
    factory.consumerFactory = consumerFactory2()
    factory.containerProperties.ackMode = ContainerProperties.AckMode.MANUAL
    factory.isBatchListener = true
    return factory
}

A few things to unpack here.

  • The container factories are essentially the same; each one just uses its respective consumer factory with the relevant properties.
  • I used the built-in StringDeserializer because my application consumes messages as strings and then uses Jackson to deserialize the JSON string into an object (see the sketch after this list). Your application may need a different deserializer, or even a custom one, depending on how data is serialized on the topic.
  • Setting the AckMode to MANUAL lets us control exactly when we acknowledge that we have consumed a message from the topic.
  • Setting the batch listener to true allows our listener to receive messages in batches rather than one at a time.
  • With this implementation, we take Kafka configuration entirely out of Spring Boot's hands, so our @KafkaListener annotations are going to look a bit different:
@KafkaListener(topics = ["\${kafka-topic-1}"], containerFactory = "containerFactory1", groupId = "\${kafka.group-id}")
  • We no longer let Spring Boot configure our group-id for us, so we need to specify it in the listener now. This means there are no spring.kafka.consumer properties defined in application.properties anymore; everything is done programmatically. Some other things must now be configured manually as well, such as automatic topic creation on startup: if you need that functionality, you will need to set up a KafkaAdmin bean yourself (see the second sketch after this list).
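As mentioned in the deserializer note above, here is a minimal sketch of the consume-as-String-then-Jackson approach. The OrderEvent type, OrderEventConsumer class, and listener name are all hypothetical, and it assumes the jackson-module-kotlin dependency is on the classpath:

import com.fasterxml.jackson.module.kotlin.jacksonObjectMapper
import com.fasterxml.jackson.module.kotlin.readValue
import org.springframework.kafka.annotation.KafkaListener
import org.springframework.kafka.support.Acknowledgment
import org.springframework.messaging.handler.annotation.Payload
import org.springframework.stereotype.Component

// Hypothetical payload type, just for illustration
data class OrderEvent(val id: String, val amount: Double)

@Component
class OrderEventConsumer {
    private val mapper = jacksonObjectMapper()

    @KafkaListener(topics = ["\${kafka-topic-1}"], containerFactory = "containerFactory1", groupId = "\${kafka.group-id}")
    fun orderEventListener(@Payload messages: List<String>, ack: Acknowledgment) {
        // Consume plain strings, then let Jackson turn each JSON string into an object
        val events = messages.map { mapper.readValue<OrderEvent>(it) }
        events.forEach { event -> /* do work with event */ }
        ack.acknowledge() // manual commit, matching AckMode.MANUAL on the container factory
    }
}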
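And since we lose Spring Boot's automatic topic creation, here is a minimal sketch of a manual KafkaAdmin setup for the first broker, reusing the assumed bootStrapServers1 field from the factories above; the topic name and settings are hypothetical:

import org.apache.kafka.clients.admin.AdminClientConfig
import org.apache.kafka.clients.admin.NewTopic
import org.springframework.context.annotation.Bean
import org.springframework.kafka.config.TopicBuilder
import org.springframework.kafka.core.KafkaAdmin

// Points the admin client at the first broker; NewTopic beans in the context
// are then created on startup if they do not already exist
@Bean
fun kafkaAdmin1(): KafkaAdmin =
    KafkaAdmin(mapOf(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG to bootStrapServers1!!))

// Hypothetical topic definition, just for illustration
@Bean
fun topic1(): NewTopic =
    TopicBuilder.name("first-topic").partitions(3).replicas(1).build()

Note that every KafkaAdmin bean in the context picks up every NewTopic bean by default, so with two brokers you will want to double-check which topics get created where.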

Conclusion

There are even more ways to accomplish this, and I know others have come up with good solutions too; it all depends on what your application needs. These are just two solutions I have had success with, and Method 1 is easy to understand, implement, and test without getting too deep into the internals of Spring Boot and Spring Kafka. Both methods will also work with more than two brokers if you need that.
