
Getting TimeoutException for some messages while sending to Kafka topic

Exception Stacktrace:
org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for ****-656 due to 30037 ms has passed since batch creation plus linger time
      at org.springframework.kafka.core.KafkaTemplate$1.onCompletion(KafkaTemplate.java:255) ~[spring-kafka-1.1.6.RELEASE.jar!/:?]
      at org.apache.kafka.clients.producer.internals.RecordBatch.done(RecordBatch.java:109) ~[kafka-clients-0.10.1.1.jar!/:?]
      at org.apache.kafka.clients.producer.internals.RecordBatch.maybeExpire(RecordBatch.java:160) ~[kafka-clients-0.10.1.1.jar!/:?]
      at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortExpiredBatches(RecordAccumulator.java:245) ~[kafka-clients-0.10.1.1.jar!/:?]
      at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:212) ~[kafka-clients-0.10.1.1.jar!/:?]
      at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135) ~[kafka-clients-0.10.1.1.jar!/:?]
      at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]

I received the above exception in the PROD environment on the very first day of deployment, for some of the Kafka messages, and had to back out the changes from PROD. In the Stage environment I never saw this exception while testing. I was able to reproduce it only once, out of roughly 10 runs. Now I have no direction on how to find the root cause (RCA) for this issue.

I am posting the Kafka sender configuration below:

retries=3
retryBackoffMS=500
lingerMS=30
autoFlush=true
acksConfig=all
kafkaServerConfig=***<Can't post here>
reconnectBackoffMS=200
compressionType=snappy
batchSize=1000000
maxBlockMS=500000
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <version>1.1.8.RELEASE</version>
        </dependency>
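
For context, here is a minimal sketch of how the properties above map onto ProducerConfig keys when the KafkaTemplate is built (spring-kafka 1.1.x style); the bootstrap servers value and the String serializers are placeholders, since the real values can't be posted here:

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.springframework.kafka.core.DefaultKafkaProducerFactory;
    import org.springframework.kafka.core.KafkaTemplate;

    public class ProducerConfigSketch {

        public static KafkaTemplate<String, String> kafkaTemplate() {
            Map<String, Object> props = new HashMap<>();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder for kafkaServerConfig
            props.put(ProducerConfig.RETRIES_CONFIG, 3);                        // retries
            props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 500);             // retryBackoffMS
            props.put(ProducerConfig.LINGER_MS_CONFIG, 30);                     // lingerMS
            props.put(ProducerConfig.ACKS_CONFIG, "all");                       // acksConfig
            props.put(ProducerConfig.RECONNECT_BACKOFF_MS_CONFIG, 200);         // reconnectBackoffMS
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");        // compressionType
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 1000000);               // batchSize
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 500000);              // maxBlockMS
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

            // the second constructor argument is the autoFlush=true flag from the list above
            return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props), true);
        }
    }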

The exception basically says that records sitting in the producer's buffer reached their timeout before they could be sent.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-91+Provide+Intuitive+User+Timeouts+in+The+Producer#KIP-91ProvideIntuitiveUserTimeoutsinTheProducer-TestPlan

The reason you don't see this exception in the stage environment is most likely that the prod environment is busier.

Can you update your spring-kafka version? Your Kafka client is far behind the newest release: https://mvnrepository.com/artifact/org.springframework.kafka/spring-kafka/1.1.8.RELEASE uses kafka-clients 0.10.x, while the current version is already 2.3.x.

If you can use the newest version, you can set delivery.timeout.ms higher.
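
As an illustration, continuing the props map from the sketch in the question: on kafka-clients 2.1.0+ (which implements KIP-91) you could raise the overall send deadline. The 120000 ms value is only an example, not a recommendation.

    // delivery.timeout.ms (kafka-clients 2.1.0+ only) bounds the total time from send()
    // returning until the producer reports success or failure, covering batching and retries.
    props.put("delivery.timeout.ms", 120000); // illustrative value; ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG on 2.1+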

If you cannot upgrade to a newer version, you have to play around with linger.ms and request.timeout.ms (try increasing them).
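
On the 0.10.x client a batch expires roughly request.timeout.ms after batch creation plus linger time, which is why the stack trace shows ~30037 ms (request.timeout.ms defaults to 30000). A sketch of raising them, again with purely illustrative values:

    // 0.10.x client: batches expire ~request.timeout.ms after creation plus linger time.
    props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 60000); // default 30000, illustrative bump
    props.put(ProducerConfig.LINGER_MS_CONFIG, 100);            // currently 30, illustrative bump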

Besides that, the default retries in current clients is the max integer value, so your retries=3 is probably not very practical. If you don't want sends to give up after just a few attempts, something like 30 is more practical. See https://docs.confluent.io/current/installation/configuration/producer-configs.html or https://kafka.apache.org/documentation/#producerconfigs

Note that both links point to the documentation for the current version.
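
As a sketch of that suggestion, again continuing the same props map (30 is just the example figure from above, not a tuned value):

    // Bound retries explicitly instead of relying on the client default.
    props.put(ProducerConfig.RETRIES_CONFIG, 30);           // example value from the suggestion above
    props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 500); // unchanged from the posted config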
