Getting TimeoutException for some messages while sending to Kafka topic
Exception Stacktrace:
org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for ****-656 due to 30037 ms has passed since batch creation plus linger time
at org.springframework.kafka.core.KafkaTemplate$1.onCompletion(KafkaTemplate.java:255) ~[spring-kafka-1.1.6.RELEASE.jar!/:?]
at org.apache.kafka.clients.producer.internals.RecordBatch.done(RecordBatch.java:109) ~[kafka-clients-0.10.1.1.jar!/:?]
at org.apache.kafka.clients.producer.internals.RecordBatch.maybeExpire(RecordBatch.java:160) ~[kafka-clients-0.10.1.1.jar!/:?]
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortExpiredBatches(RecordAccumulator.java:245) ~[kafka-clients-0.10.1.1.jar!/:?]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:212) ~[kafka-clients-0.10.1.1.jar!/:?]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135) ~[kafka-clients-0.10.1.1.jar!/:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
I received the above exception in the PROD environment, for some of the Kafka messages, on the very first day of deployment, and had to back out the changes from PROD. In the Stage environment I never saw this exception while testing. I was able to reproduce it once, but only once out of maybe 10 runs. Now I have no direction on how to find the root cause (RCA) for this issue.
I am posting the Kafka sender configuration below:
retries=3
retryBackoffMS=500
lingerMS=30
autoFlush=true
acksConfig=all
kafkaServerConfig=***<Can't post here>
reconnectBackoffMS=200
compressionType=snappy
batchSize=1000000
maxBlockMS=500000
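For reference, these application-level keys presumably map onto the standard Kafka producer properties roughly as follows (a sketch; the exact mapping depends on how the application builds its producer configuration, and the names on the left are the official kafka-clients keys):

```properties
# Assumed mapping from the app's config keys to standard producer properties
retries=3                  # retries
retry.backoff.ms=500       # retryBackoffMS
linger.ms=30               # lingerMS
acks=all                   # acksConfig
reconnect.backoff.ms=200   # reconnectBackoffMS
compression.type=snappy    # compressionType
batch.size=1000000         # batchSize
max.block.ms=500000        # maxBlockMS
```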
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>1.1.8.RELEASE</version>
</dependency>
The exception basically says that records sitting in the producer's buffer reached the timeout before they could be sent.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-91+Provide+Intuitive+User+Timeouts+in+The+Producer#KIP-91ProvideIntuitiveUserTimeoutsinTheProducer-TestPlan
The reason you don't see this exception in Stage is that the PROD environment is busier.
Can you update your spring-kafka version? Your Kafka client is far behind the newest version. spring-kafka 1.1.8.RELEASE (https://mvnrepository.com/artifact/org.springframework.kafka/spring-kafka/1.1.8.RELEASE) uses kafka-clients 0.10.x, while the current release line is already 2.3.x.
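A hedged sketch of the upgraded dependency (the 2.3.1.RELEASE version shown here is illustrative; verify compatibility with the Spring Framework/Boot versions actually in use before upgrading):

```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <!-- 2.3.x pulls in kafka-clients 2.3.x -->
    <version>2.3.1.RELEASE</version>
</dependency>
```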
If you can use the newest version, you can set delivery.timeout.ms higher.
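With a newer client (kafka-clients 2.1+), the overall send deadline can be raised in one place; a sketch with an illustrative value, not a recommendation:

```properties
# delivery.timeout.ms (KIP-91) bounds the total time for a send to
# succeed or fail, covering batching delay, in-flight time, and retries.
# It must be >= linger.ms + request.timeout.ms.
delivery.timeout.ms=120000
```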
If you cannot upgrade to a newer version, you have to play with linger.ms and request.timeout.ms (try increasing them).
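On the 0.10.x client, batch expiry is driven by request.timeout.ms plus linger time (which matches the "30037 ms has passed since batch creation plus linger time" in the stack trace, with the default request.timeout.ms of 30000). A hedged tuning sketch with illustrative values:

```properties
# 0.10.x: batches waiting in the accumulator expire after roughly
# request.timeout.ms plus linger time; raising these gives a busy
# broker more headroom before the producer gives up.
request.timeout.ms=60000
linger.ms=100
```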
Besides that, the default for retries is the maximum integer value, so your retries=3 would not be very practical. If you don't want the producer to give up quickly but also don't want it retrying forever, something like 30 is more practical.
https://docs.confluent.io/current/installation/configuration/producer-configs.html or https://kafka.apache.org/documentation/#producerconfigs
Note that both links point to the current version.