How to achieve high performance with a Spring Kafka consumer
How do I increase the performance of my Kafka consumer? I have (and need) at-least-once Kafka consumer semantics.
I have the configuration below. processInDB() takes 2 minutes to complete, so processing just 10 messages (all in a single partition) takes 20 minutes (assuming 2 minutes per message). I could call processInDB() on a different thread, but then I might lose messages! How can I process all 10 messages within a 2-to-4-minute window?
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "grpid-mytopic120112141");
props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
ConcurrentKafkaListenerContainerFactory<String, ValidatedConsumerClass> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.getContainerProperties().setAckMode(AckMode.RECORD);
factory.setErrorHandler(errorHandler());
Below is my Kafka consumer code.
@KafkaListener(id = "foo", topics = "mytopic-3", concurrency = "6", groupId = "mytopic-1-groupid")
public void consumeFromTopic1(@Payload @Valid ValidatedConsumerClass message, ConsumerRecordMetadata c) {
    dbservice.processInDB(message);
}
Using a batch listener would help: you just need to hold up the consumer thread in the listener until all the individual records have completed processing.
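A stdlib-only sketch of that idea (no Spring dependencies; `processInDB` here is a hypothetical stand-in for the asker's DB call): hand the whole batch to a thread pool, then block until every record finishes. In a real batch listener, the container thread would only return, and the batch's offsets only be committed, after this point, which preserves at-least-once semantics.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class BatchProcessSketch {

    // Hypothetical stand-in for dbservice.processInDB(message).
    static String processInDB(String message) {
        return "processed:" + message;
    }

    public static void main(String[] args) throws Exception {
        // Simulate the 10 records of one polled batch.
        List<String> batch = IntStream.range(0, 10)
                .mapToObj(i -> "msg-" + i)
                .collect(Collectors.toList());

        ExecutorService pool = Executors.newFixedThreadPool(10);
        try {
            // Submit every record, then block until ALL have completed.
            List<Future<String>> results = pool.invokeAll(batch.stream()
                    .map(m -> (Callable<String>) () -> processInDB(m))
                    .collect(Collectors.toList()));
            for (Future<String> f : results) {
                System.out.println(f.get()); // get() rethrows if any record failed
            }
            // Only here would the listener return and the offsets be committed.
        } finally {
            pool.shutdown();
        }
    }
}
```

With 10 worker threads, 10 records that each take 2 minutes finish in roughly 2 minutes of wall-clock time instead of 20, while the commit still happens only after the slowest record.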
In the next release (the 2.8.0-M1 milestone, released today) there is support for out-of-order manual acknowledgments, where the framework defers the commits until the "gaps are filled": https://docs.spring.io/spring-kafka/docs/2.8.0-M1/reference/html/#x28-ooo-commits
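The "defer commits until the gaps are filled" bookkeeping can be illustrated with a small stdlib sketch. This is my own illustration of the idea, not the framework's actual implementation: records acknowledged out of order are parked, and the committable offset only advances once every earlier offset has also been acknowledged.

```java
import java.util.TreeSet;

public class OutOfOrderAckSketch {

    private final TreeSet<Long> acked = new TreeSet<>(); // acks waiting on a gap
    private long committable; // next offset we still need before committing further

    OutOfOrderAckSketch(long startOffset) {
        this.committable = startOffset;
    }

    // Record an ack (possibly out of order); return the committable offset
    // (exclusive), i.e. everything below it is safe to commit.
    synchronized long ack(long offset) {
        acked.add(offset);
        // Advance the commit point while the gap in front of it is filled.
        while (acked.contains(committable)) {
            acked.remove(committable);
            committable++;
        }
        return committable;
    }

    public static void main(String[] args) {
        OutOfOrderAckSketch tracker = new OutOfOrderAckSketch(0);
        System.out.println(tracker.ack(2)); // offset 0 not done yet: nothing committable
        System.out.println(tracker.ack(0)); // 0 done, 1 still missing: commit up to 1
        System.out.println(tracker.ack(1)); // gap filled: offsets 0..2 committable
    }
}
```

This is why out-of-order acks keep at-least-once semantics: a crash can only replay records at or after the committable point, never skip one.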
Another suggestion, not purely related to Spring Kafka: since your tags say you are also exploring the plain consumer API and not only Spring Kafka, I'll allow myself to suggest it here. You might want to try out this API:
https://www.confluent.io/blog/introducing-confluent-parallel-message-processing-client/
https://github.com/confluentinc/parallel-consumer
But as stated in my earlier comments, you might just want to add more partitions.
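For reference, the partition count can be increased with the stock `kafka-topics.sh` tool. The broker address and the target count of 6 (to match the listener's `concurrency = "6"`) are assumptions; also note that adding partitions changes the key-to-partition mapping for keyed topics, so ordering by key only holds for records produced afterwards.

```shell
# Check the current partition count first
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic-3

# Raise it to 6 so each of the 6 container threads can own a partition
kafka-topics.sh --bootstrap-server localhost:9092 --alter --topic mytopic-3 --partitions 6
```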