
Kafka Streams with processing.guarantee set to EXACTLY_ONCE issue

I'm working in a development environment with 3 (dockerized) Kafka brokers on my system. The brokers have transaction.state.log.replication.factor set to 3.

In the streams application config I set processing.guarantee to EXACTLY_ONCE, and in the consumer application config I set isolation.level to "read_committed".
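For reference, this is a minimal sketch of how those two settings are typically applied; the application id, group id, and bootstrap servers below are placeholder values, not from my setup:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceConfigExample {
    public static void main(String[] args) {
        // Streams application: enable exactly-once processing.
        Properties streamsProps = new Properties();
        streamsProps.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        streamsProps.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        streamsProps.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
                StreamsConfig.EXACTLY_ONCE);

        // Downstream consumer application: only read committed records.
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-group");
        consumerProps.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
    }
}
```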

I have checked the other configurations at https://docs.confluent.io/current/streams/developer-guide/config-streams.html#processing-guarantee and set up my environment according to the guide.

After a minute of message production from the stream application, which reads a state store and produces 100 messages using the context.forward(..) function, the consumer application stops reading, as if there weren't any committed messages on the assigned partitions.
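The relevant part of my processor is roughly like the sketch below; the store name "my-store", the punctuation interval, and the types are placeholders, not my actual code:

```java
import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

// Processor that caches records in a state store and periodically
// forwards the store contents downstream via context.forward(..).
public class StoreForwardingProcessor extends AbstractProcessor<String, String> {

    private KeyValueStore<String, String> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        super.init(context);
        store = (KeyValueStore<String, String>) context.getStateStore("my-store");

        // On each punctuation, scan the store and forward every entry.
        context.schedule(Duration.ofSeconds(10), PunctuationType.WALL_CLOCK_TIME, timestamp -> {
            try (KeyValueIterator<String, String> it = store.all()) {
                while (it.hasNext()) {
                    KeyValue<String, String> entry = it.next();
                    context.forward(entry.key, entry.value);
                }
            }
        });
    }

    @Override
    public void process(String key, String value) {
        // Buffer incoming records in the store for later forwarding.
        store.put(key, value);
    }
}
```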

After some time the stream application crashes with the following error:

"Aborting producer batches due to fatal error org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker."

It seems like the stream producer cannot receive the ack and the transaction expires.

Edit 1: When I stop the stream application, the consumer receives the committed messages.

Upgrading the Kafka server and client versions seems to have resolved the problem.
