
Facing Caused by: org.apache.kafka.clients.consumer.CommitFailedException:

Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member.
    This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing.
    You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
java.lang.IllegalStateException: This error handler cannot process 'org.apache.kafka.clients.consumer.CommitFailedException's; no record information is available
    at org.springframework.kafka.listener.DefaultErrorHandler.handleOtherException(DefaultErrorHandler.java:155) ~[spring-kafka-2.8.6.jar:2.8.6]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.handleConsumerException(KafkaMessageListenerContainer.java:1791) [spring-kafka-2.8.6.jar:2.8.6]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1298) [spring-kafka-2.8.6.jar:2.8.6]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_321]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_321]
    at java.lang.Thread.run(Thread.java:750) [na:1.8.0_321]
Caused by: org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the part

Here are the producer and consumer properties.

Consumer properties

#set the server port
server.port=8004

#Kafka properties
spring.kafka.bootstrap-servers=kakfa server instance
spring.kafka.properties.security.protocol=SASL_SSL
spring.kafka.properties.sasl.mechanism=PLAIN
spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="****" password="****";
spring.kafka.producer.properties.enable.idempotence=false
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
ssl.endpoint.identification.algorithm=
spring.kafka.properties.ssl.truststore.type=JKS
spring.kafka.properties.ssl.truststore.location=C://Users//Public//Projects//kafka.jks
spring.kafka.properties.ssl.truststore.password=****
request.topic=req_topic
consumer.group.id=consumer_group_id
spring.kafka.consumer.properties.spring.json.trusted.packages=*

Producer properties

#set the server port
server.port=8003

#Kafka broker properties
spring.kafka.bootstrap-servers=kakfa server instance
spring.kafka.properties.security.protocol=SASL_SSL
spring.kafka.properties.sasl.mechanism=PLAIN
spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="****" password="****";
spring.kafka.producer.properties.enable.idempotence=false
ssl.endpoint.identification.algorithm=
spring.kafka.properties.ssl.truststore.type=JKS
spring.kafka.properties.ssl.truststore.location=C://Users//Public//Projects//kafka.jks
spring.kafka.properties.ssl.truststore.password=****

#Kafka serializers and deserializers
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer

#Kafka topic and group configurations
request.topic=req_topic
response.topic=resp_topic
consumer.group.id=consumer_group_id
spring.kafka.consumer.properties.spring.json.trusted.packages=*

I tried the following properties, but still faced the same issue:

#Kafka consumer properties
#spring.kafka.consumer.properties[request.timeout.ms]=300000
#spring.kafka.consumer.properties[heartbeat.interval.ms]=1000
#spring.kafka.consumer.properties[max.poll.interval.ms]=900000
#spring.kafka.consumer.properties[session.timeout.ms]=600000
#spring.kafka.consumer.properties[max.poll.records]=100

Here is my consumer class:

    @KafkaListener(topics = "${request.topic}", groupId = "${consumer.group.id}", topicPartitions = {
            @TopicPartition(topic = "${request.topic}", partitions = "${partition}") })
    @SendTo
    public String consumer(String message)
    {
        return message;
    }

Here is my producer class:

@Service
public class ProducerService {

    @Autowired
    private ReplyingKafkaTemplate<String, String, String> replyingKafkaTemplate;

    @Value("${request.topic}")
    String requestTopic;

    public Object getResponse(Message message) {
        try {
            ProducerRecord<String, String> record = new ProducerRecord<>(requestTopic, message);
            RequestReplyFuture<String, String, String> replyFuture = replyingKafkaTemplate.sendAndReceive(record);
            ConsumerRecord<String, String> consumerRecord = replyFuture.get(10, TimeUnit.SECONDS);
            return consumerRecord.value();
        } catch (ExecutionException e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(e.getMessage());
        } catch (InterruptedException e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(e.getMessage());
        } catch (TimeoutException e) {
            return ResponseEntity.status(HttpStatus.REQUEST_TIMEOUT).body(e.getMessage());
        } catch (Exception e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Un expected error occured please try again later. ");
        }
    }
}
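The ReplyingKafkaTemplate injected above also needs a reply-side listener container, and that configuration is not shown in the question. The following is only a sketch of the usual wiring, under the assumption that ${response.topic} from the producer properties is the reply topic; the class name, bean names, and the reply group id are placeholders, not taken from the original code.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;

// Hypothetical configuration class; the question does not show how the template is built.
@Configuration
public class ReplyingTemplateConfig {

    // Pairs the producer factory with a container that consumes the @SendTo replies.
    @Bean
    public ReplyingKafkaTemplate<String, String, String> replyingKafkaTemplate(
            ProducerFactory<String, String> pf,
            ConcurrentMessageListenerContainer<String, String> repliesContainer) {
        return new ReplyingKafkaTemplate<>(pf, repliesContainer);
    }

    // Listens on the assumed reply topic; the group id here is a placeholder.
    @Bean
    public ConcurrentMessageListenerContainer<String, String> repliesContainer(
            ConcurrentKafkaListenerContainerFactory<String, String> containerFactory,
            @Value("${response.topic}") String responseTopic) {
        ConcurrentMessageListenerContainer<String, String> container =
                containerFactory.createContainer(responseTopic);
        container.getContainerProperties().setGroupId("reply-consumer-group");
        container.setAutoStartup(false); // started by the ReplyingKafkaTemplate rather than the context
        return container;
    }
}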

Application properties

spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
#spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
spring.kafka.consumer.properties.spring.deserializer.value.delegate.class=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.consumer.properties.spring.json.trusted.packages=*

#spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
#spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
#spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
#spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
#spring.kafka.consumer.properties.spring.json.trusted.packages=*
#spring.kafka.consumer.auto-offset-reset=earliest
#spring.kafka.consumer.properties.max.poll.interval.ms=5000000
#spring.kafka.consumer.properties[request.timeout.ms]=300000
#spring.kafka.consumer.properties[heartbeat.interval.ms]=1000
#spring.kafka.consumer.properties[max.poll.interval.ms]=900000
#spring.kafka.consumer.properties[session.timeout.ms]=600000

#Default properties.
#request.timeout.ms=30000
#heartbeat.interval.ms=3000
#max.poll.interval.ms=300000
#max.poll.records=500
#session.timeout.ms=45000

#Updated properties.
spring.kafka.consumer.properties.request.timeout.ms=300000
spring.kafka.consumer.properties.heartbeat.interval.ms=1000
spring.kafka.consumer.properties.max.poll.interval.ms=600000
spring.kafka.consumer.properties.max.poll.records=100
spring.kafka.consumer.properties.session.timeout.ms=600000

spring.main.allow-circular-references=true
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-offset-reset=earliest

The error message is quite clear.

Your listener is taking too long to process the records returned by poll().
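Concretely, the consumer has to finish processing everything returned by one poll() and call poll() again within max.poll.interval.ms; otherwise it is removed from the group, a rebalance is triggered, and the subsequent offset commit is rejected with exactly this exception. Following the exception's own advice, a minimal sketch of the two knobs to adjust in application.properties (the values are illustrative, not tuned recommendations):

# Either give each poll loop more time...
spring.kafka.consumer.properties.max.poll.interval.ms=600000
# ...or hand the listener fewer records per poll so each loop finishes sooner.
spring.kafka.consumer.properties.max.poll.records=100

If the listener genuinely needs a long time per record, reducing max.poll.records is usually the safer first step, since a very large max.poll.interval.ms also delays detection of a consumer that is truly stuck.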
