
Kafka consumer configuration / performance issues

I'm trying out Kafka as an alternative to AWS SQS. The motivation is primarily to improve performance: Kafka would eliminate the constraint of pulling 10 messages at a time with a cap of 256 KB. Here's a high-level scenario of my use case. I have a bunch of crawlers which send documents for indexing. The payload size is around 1 MB on average. The crawlers call a SOAP endpoint, which in turn runs producer code to submit the messages to a Kafka topic. The consumer app picks up the messages and processes them. For my test box, I've configured the topic with 30 partitions and a replication factor of 2. The two Kafka instances are running with one ZooKeeper instance. The Kafka version is 0.10.0.

For my testing, I published 7 million messages to the topic. I created a consumer group with 30 consumer threads, one per partition. I was initially under the impression that this would substantially speed up processing compared to what I was getting via SQS. Unfortunately, that was not the case. In my case, the processing of the data is complex and takes 1-2 minutes on average to complete. That led to a flurry of partition rebalances, as the threads were not able to heartbeat in time. I could see a bunch of messages in the log citing:

Auto offset commit failed for group full_group: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured session.timeout.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.

This led to the same messages being processed multiple times. I tried playing around with the session timeout, max.poll.records, and poll time to avoid this, but that slowed down the overall processing big time. Here are some of the configuration parameters:


metadata.max.age.ms = 300000
max.partition.fetch.bytes = 1048576
bootstrap.servers = [kafkahost1:9092, kafkahost2:9092]
enable.auto.commit = true
max.poll.records = 10000
request.timeout.ms = 310000
heartbeat.interval.ms = 100000
auto.commit.interval.ms = 1000
receive.buffer.bytes = 65536
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class com.autodesk.preprocessor.consumer.serializer.KryoObjectSerializer
group.id = full_group
retry.backoff.ms = 100
fetch.max.wait.ms = 500
connections.max.idle.ms = 540000
session.timeout.ms = 300000
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
metrics.sample.window.ms = 30000
auto.offset.reset = latest
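The error message above boils down to an arithmetic constraint: with enable.auto.commit=true on 0.10.0, heartbeats are sent from the poll loop, so (records per poll) × (time per record) must stay well under session.timeout.ms. A back-of-the-envelope sketch with the numbers from this question (the per-record time is an assumption for illustration, not a measurement):

```java
// Rough budget check for max.poll.records vs. session.timeout.ms.
// With 1-2 min per record, even the generous 300 s session timeout above
// only leaves room for a single record per poll.
public class PollBudget {

    static long safeMaxPollRecords(long sessionTimeoutMs, long perRecordMs, long headroomFactor) {
        // Keep 1/headroomFactor of the session as processing budget.
        return Math.max(1, sessionTimeoutMs / (perRecordMs * headroomFactor));
    }

    public static void main(String[] args) {
        long sessionTimeoutMs = 300_000; // session.timeout.ms from the config above
        long perRecordMs = 90_000;       // assumed ~1.5 min average processing per record
        // With half the session kept as headroom, only 1 record per poll is
        // safe -- which is why max.poll.records=10000 triggers rebalances.
        System.out.println("safe max.poll.records = "
                + safeMaxPollRecords(sessionTimeoutMs, perRecordMs, 2));
    }
}
```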
I reduced the consumer poll timeout to 100 ms. It reduced the rebalancing issues and eliminated duplicate processing, but slowed down the overall process significantly. It ended up taking 35 hours to finish processing all 6 million messages, compared to 25 hours using the SQS-based solution. Each consumer thread retrieved 50-60 messages per poll on average, though some of them polled 0 records at times. I'm not sure about this behavior when there is a huge number of messages available in the partition. The same thread was able to pick up messages during the subsequent iteration. Could this be due to rebalancing?

Here's my consumer code:

while (true) {
    try {
        ConsumerRecords records = consumer.poll(100);
        for (ConsumerRecord record : records) {
            if (record.value() != null) {
                TextAnalysisRequest textAnalysisObj = record.value();
                if (textAnalysisObj != null) {
                    // Process record
                    PreProcessorUtil.submitPostProcessRequest(textAnalysisObj);
                }
            }
        }
    } catch (Exception ex) {
        LOGGER.error("Error in Full Consumer group worker", ex);
    }
}
I understand that the record-processing part is a bottleneck in my case. But I'm sure a few folks here have a similar use case of dealing with long processing times. I thought of doing asynchronous processing by spinning off each processor in its dedicated thread, or using a thread pool with a large capacity, but I'm not sure whether it would create a big load on the system. At the same time, I've seen a couple of instances where people have used the pause and resume APIs to perform the processing in order to avoid the rebalancing issue.
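For reference, the pause/resume approach mentioned above could look roughly like the following sketch against the 0.10 consumer API, where `processBatchAsync` is a hypothetical hand-off to a worker pool, not a real Kafka method:

```java
// Sketch only: pause all assigned partitions while a long-running batch is
// processed, keep calling poll() so the session stays alive, then resume.
// processBatchAsync() is a hypothetical placeholder for your worker pool.
consumer.subscribe(Collections.singletonList("documents"));
while (true) {
    ConsumerRecords<String, TextAnalysisRequest> records = consumer.poll(100);
    if (!records.isEmpty()) {
        consumer.pause(consumer.assignment().toArray(new TopicPartition[0]));
        Future<?> batch = processBatchAsync(records); // hand off to worker pool
        while (!batch.isDone()) {
            // Returns no records while paused, but keeps heartbeating,
            // so the group coordinator does not rebalance the partitions.
            consumer.poll(0);
        }
        consumer.resume(consumer.assignment().toArray(new TopicPartition[0]));
    }
}
```

Note that in 0.10.0 `pause`/`resume` take varargs of TopicPartition; later client versions take a Collection instead.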

I'm really looking for some advice / best practices in this circumstance, particularly the recommended configuration settings around heartbeat, request timeout, max poll records, auto-commit interval, and poll interval. If Kafka is not the right tool for my use case, please let me know as well.

You can start by processing messages asynchronously, in a separate thread from the thread that reads from Kafka. This way auto-committing will be very fast and Kafka will not cut your session. Something like this:

private final BlockingQueue<TextAnalysisRequest> requests =
        new LinkedBlockingQueue<>();

In the reading thread:

while (true) {
    try {
        ConsumerRecords records = consumer.poll(100);
        for (ConsumerRecord record : records) {
            if (record.value() != null) {
                TextAnalysisRequest textAnalysisObj = record.value();
                if (textAnalysisObj != null) {
                    // Hand the record off to the processing thread
                    requests.offer(textAnalysisObj);
                }
            }
        }
    } catch (Exception ex) {
        LOGGER.error("Error in Full Consumer group worker", ex);
    }
}

In the processing thread:

while (!Thread.currentThread().isInterrupted()) {
    try {
        TextAnalysisRequest textAnalysisObj = requests.take();
        PreProcessorUtil.submitPostProcessRequest(textAnalysisObj);
    } catch (InterruptedException e) {
        LOGGER.info("Process thread interrupted", e);
        Thread.currentThread().interrupt();
    } catch (Throwable t) {
        LOGGER.warn("Unexpected throwable while processing.", t);
    }
}
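The hand-off pattern above can be exercised with plain java.util.concurrent and no broker. One detail worth adding: using a bounded queue gives you backpressure, so the reading thread cannot outrun the workers and buffer records unboundedly. The capacity and names below are illustrative:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal stand-in for the reader/processor hand-off: a bounded queue blocks
// the reader when the worker falls behind, instead of buffering unboundedly.
public class HandOffDemo {

    static List<String> run(List<String> inputs) throws InterruptedException {
        BlockingQueue<String> requests = new ArrayBlockingQueue<>(2); // bounded = backpressure
        List<String> processed = new CopyOnWriteArrayList<>();

        Thread processor = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // Stand-in for PreProcessorUtil.submitPostProcessRequest(...)
                    processed.add(requests.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        processor.start();

        // Stand-in for the poll loop handing records over.
        for (String record : inputs) {
            requests.put(record); // blocks if the queue is full
        }
        while (processed.size() < inputs.size()) {
            Thread.sleep(5); // wait for the worker to drain the queue
        }
        processor.interrupt();
        processor.join();
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(Arrays.asList("doc-1", "doc-2", "doc-3")));
    }
}
```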

Also take a look at this documentation for a strategy to send large messages through Kafka: http://blog.cloudera.com/blog/2015/07/deploying-apache-kafka-a-practical-faq/

In short, it says that Kafka performs best on small messages of around 10 KB, and if you need to send larger messages, it's better to put them on network storage and send just their location through Kafka, or split them.
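That reference-passing idea can be sketched with the filesystem standing in for the network storage; the directory and naming scheme are illustrative, not a prescribed layout:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

// Sketch of the "send the location, not the payload" pattern. A shared
// directory stands in for real network storage (S3, NFS, etc.).
public class ClaimCheck {

    // Producer side: store the large payload, return the small reference
    // that would be sent through Kafka instead of the ~1 MB document.
    static String store(Path sharedDir, byte[] payload) throws IOException {
        Path location = sharedDir.resolve(UUID.randomUUID() + ".doc");
        Files.write(location, payload);
        return location.toString(); // this tiny string is the Kafka message
    }

    // Consumer side: resolve the reference back to the payload.
    static byte[] fetch(String reference) throws IOException {
        return Files.readAllBytes(Path.of(reference));
    }

    public static void main(String[] args) throws IOException {
        Path sharedDir = Files.createTempDirectory("claim-check");
        String ref = store(sharedDir, "a large crawled document".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(fetch(ref), StandardCharsets.UTF_8));
    }
}
```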

