
Kafka Streams rebalancing latency spikes on high throughput kafka-streams services

We are starting to work with Kafka Streams; our service is a very simple stateless consumer.

We have tight latency requirements, and we are seeing latency that is far too high while the consumer group is rebalancing. In our scenario, rebalancing will happen relatively often: rolling updates of code, scaling the service up/down, containers being shuffled by the cluster scheduler, containers dying, hardware failing.

One of the first tests we did was to run a small consumer group with 4 consumers handling a small volume of messages (1K/sec) and kill one of them; the cluster manager (currently AWS ECS, probably soon moving to K8S) starts a new one, so more than one rebalance happens.

Our most critical metric is latency, which we measure as the milliseconds between message creation in the publisher and message consumption in the subscriber. We saw the maximum latency spike from a few milliseconds to almost 15 seconds.
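Roughly, the measurement works like the following sketch (broker, group, and topic names are placeholders; it assumes the producer stamps records at creation time, i.e. the default CreateTime timestamps):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// End-to-end latency = consumer wall clock minus the record timestamp
// stamped by the producer at message creation (CreateTime).
public class LatencyProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");   // placeholder
        props.put("group.id", "latency-probe");          // placeholder
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("input-topic"));  // placeholder
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    long latencyMs = System.currentTimeMillis() - record.timestamp();
                    System.out.printf("end-to-end latency: %d ms%n", latencyMs);
                }
            }
        }
    }
}
```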

[Charts: latency, end-to-end latency, messages processed per second]

We have also done tests with some rolling updates of code and the results are worse, since our deployment is not prepared for Kafka services and we trigger a lot of rebalances. We'll need to work on that, but we are wondering what strategies other people follow for code deployment / autoscaling with the minimum possible delays.

Not sure if it helps, but our requirements are pretty relaxed regarding message processing: we don't care about some messages being processed twice from time to time, nor are we very strict about message ordering.

We are using all default configurations, no tuning.

We need to reduce these latency spikes during rebalancing. Can someone please give us some hints on how to work on this? Is tuning configurations enough? Do we need to use a specific partition assignor? Implement our own?

What is the recommended approach to code deployment / autoscaling with the minimum possible delays?

Our Kafka version is 1.1.0 (looking at the libs we found, for example, kafka/kafka_2.11-1.1.0-cp1.jar); we installed Confluent Platform 4.1.0. On the consumer side, we are using Kafka Streams 2.1.0.

Thank you for reading my question and your responses.

If the gap is introduced mainly by the rebalance itself, consider not triggering a rebalance at all: just let AWS / K8s do their work and resume the bounced instance, paying the unavailability period during the bounce --- note that for stateless instances this is usually better, while for stateful applications you'd better make sure the restarted instance can access its associated storage so that it saves on bootstrapping state from the changelog.

To do that:

In Kafka 1.1, to reduce unnecessary rebalances you can increase the session timeout of the group so that the coordinator becomes "less sensitive" about members not responding with heartbeats --- note that we have disabled the leave.group request since 0.11.0 for Streams' consumers ( https://issues.apache.org/jira/browse/KAFKA-4881 ), so with a longer session timeout a member leaving the group will not trigger a rebalance, though the member rejoining will still trigger one. Still, saving one rebalance is better than nothing.
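A minimal sketch of what raising the session timeout could look like in a Streams application (application id, broker address, and timeout values below are illustrative assumptions, not recommendations):

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class SessionTimeoutConfigExample {
    public static Properties streamsProperties() {
        Properties props = new Properties();
        // Application id and bootstrap servers are placeholders for this sketch.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-stateless-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");

        // A larger session timeout makes the group coordinator wait longer before it
        // declares a silent member dead, so a fast container bounce can complete
        // without a member-failure rebalance (30s is an example value).
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30_000);
        // Heartbeats are typically kept to roughly a third of the session timeout.
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 10_000);
        return props;
    }
}
```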

In the coming Kafka 2.2, though, we've made a big improvement in optimizing rebalance scenarios, primarily captured in KIP-345 ( https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances ). With that, far fewer rebalances will be triggered by a rolling bounce, given the reasonable config settings introduced in KIP-345. So I'd strongly recommend you upgrade to 2.2 and see if it helps your case.
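Once you are on clients that support KIP-345, static membership is switched on per instance via the group.instance.id consumer config. A hedged sketch (the id scheme, application id, broker address, and timeout are assumptions for illustration):

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StaticMembershipExample {
    public static Properties streamsProperties(String podName) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-stateless-app"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // placeholder

        // Static membership (KIP-345): give every instance a stable, unique id that
        // survives restarts (e.g. a StatefulSet pod name). The coordinator then
        // recognizes a restarted instance as the same member and skips the rebalance,
        // provided the instance rejoins within session.timeout.ms.
        props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, podName);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 60_000); // example value
        return props;
    }
}
```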

There are several configuration changes required in order to significantly decrease rebalance latency, especially during a deployment rollout:

1. Keep the latest version of Kafka-Streams

Kafka-Streams rebalance performance gets better and better over time. A feature improvement worth highlighting is the incremental cooperative rebalancing protocol. Kafka-Streams has this feature out of the box (since version 2.4.0, with further improvements in 2.6.0) via the default partition assignor StreamsPartitionAssignor.
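For illustration, a minimal stateless Kafka-Streams application on a 2.4+ client picks up cooperative rebalancing without any extra configuration, because StreamsPartitionAssignor is always used. Topic names, serdes, application id, and the broker address below are placeholders:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class StatelessApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-stateless-app");  // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");    // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Simple stateless pass-through: consume, transform, produce.
        builder.<String, String>stream("input-topic")
               .mapValues(value -> value.toUpperCase())
               .to("output-topic");

        // No partition.assignment.strategy override is needed: Kafka Streams always
        // uses StreamsPartitionAssignor, which performs incremental cooperative
        // rebalancing on 2.4+ clients.
        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```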

2. Add the Kafka-Streams configuration property internal.leave.group.on.close = true to send a consumer leave-group request on app shutdown

By default, Kafka-Streams doesn't send a consumer leave-group request on graceful app shutdown. As a result, messages from the partitions that were assigned to the terminating app instance will not be processed until the session of that consumer expires (after session.timeout.ms), and only after that expiration is a new rebalance triggered. To change this default behavior, we should use the internal Kafka Streams config property internal.leave.group.on.close = true (this property should be added to the properties passed when creating new KafkaStreams(streamTopology, properties)). As the property is internal, be careful and double-check before upgrading to a new version that the config is still there.
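A sketch of how that property could be passed (application id and broker address are placeholders; since the key is internal and unsupported, verify it still exists before every upgrade):

```java
import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;

public class LeaveGroupOnCloseExample {
    public static KafkaStreams create(Topology streamTopology) {
        Properties properties = new Properties();
        properties.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-stateless-app"); // placeholder
        properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // placeholder

        // Internal config (not part of the public API): ask the embedded consumers to
        // send a LeaveGroup request on close, so a graceful shutdown triggers an
        // immediate rebalance instead of waiting for session.timeout.ms to expire.
        properties.put("internal.leave.group.on.close", true);

        return new KafkaStreams(streamTopology, properties);
    }
}
```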

3. Decrease the number of simultaneously restarted app instances during deployment rollout

Using Kubernetes, we can control how many app instances are replaced at the same time during a deployment. This is achievable with the max surge and max unavailable properties. If we have tens of app instances, the default configuration will roll out multiple new instances while multiple old instances are terminating at the same time. That means multiple partitions will require reassignment to other app instances, multiple rebalances will be fired, and the overall rebalance latency will be significant. The preferable configuration for decreasing rebalance duration is max surge = 1 and max unavailable = 0, as in the sketch below.
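For illustration, a sketch of the corresponding rolling-update strategy in a Kubernetes Deployment manifest (the name, labels, replica count, and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-streams-app            # placeholder
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                 # at most one extra pod is created during the rollout
      maxUnavailable: 0           # old pods are removed only after the new pod is ready
  selector:
    matchLabels:
      app: my-streams-app
  template:
    metadata:
      labels:
        app: my-streams-app
    spec:
      containers:
        - name: streams
          image: my-registry/my-streams-app:latest   # placeholder
```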

4. Increase the number of topic partitions and app instances with a slight excess

Having a higher number of partitions leads to lower throughput per single partition. Also, with a higher number of app instances, the restart of a single one leads to a smaller Kafka lag during rebalancing. Make sure as well that you don't have frequent up-scaling and down-scaling of app instances, as each scaling event triggers a rebalance. If you see several up-scale and down-scale events per hour, the configured minimum number of instances is probably too low, so you need to increase it.

For more details, please take a look at the article Kafka-Streams - Tips on How to Decrease Re-Balancing Impact for Real-Time Event Processing On Highly Loaded Topics.
