org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition - keeps appearing
We are using Spring (Spring Kafka 2.4.5, Spring Cloud Stream 3.0.1) on OpenShift, with the following configuration: multiple brokers and topics, each topic with 8 partitions and a replication factor of 3, and multiple Spring Boot consumers.

We get the exception below when we bring down one of the brokers as part of resiliency testing, and we still get the same error even after we bring the broker back up.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
2020-05-19 18:39:57.598 ERROR [service,,,] 1 --- [ad | producer-5] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='{49, 50, 49, 50, 54, 53, 56}' and payload='{123, 34, 115, 111, 117, 114, 99, 101, 34, 58, 34, 72, 67, 80, 77, 34, 44, 34, 110, 97, 109, 101, 34...' to topic topicname
2020-05-19 18:39:57.598 WARN [service,,,] 1 --- [ad | producer-5] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-5] Received invalid metadata error in produce request on partition topicname-4 due to org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.. Going to request metadata update now
I searched around and most answers say that changing the retry value to more than 1 will help, but since the error keeps occurring even after the broker is back up, I am not sure whether that works.

This is what I have in the properties file:
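If I understand those answers correctly, the retry setting would be passed through to the Kafka producer via the binder's configuration map, something like the sketch below (the exact keys and values are my assumption; I have not verified that they fix the problem):

```properties
# Forwarded to every producer the Kafka binder creates (illustrative values)
spring.cloud.stream.kafka.binder.configuration.retries=10
spring.cloud.stream.kafka.binder.configuration.retry.backoff.ms=1000
```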
spring.cloud.stream.kafka.binder.brokers=${output.server}
spring.cloud.stream.kafka.binder.requiredAcks=1
spring.cloud.stream.bindings.outputChannel.destination=${output.topic}
spring.cloud.stream.bindings.outputChannel.content-type=application/json
And one line of code to send messages using the Kafka streams API:
`client.outputChannel().send(MessageBuilder.withPayload(message).setHeader(KafkaHeaders.MESSAGE_KEY, message.getId().getBytes()).build());`
Please help me.

Thanks, Rams
When you bring down one of the brokers, the producer's metadata may not have been updated yet. So when it tries to send data for partitions on that broker, the send fails; the producer then requests a metadata update and retries against the correct broker, which is now the new leader.

When the broker is added back to the cluster, the controller triggers a rebalance of the topic partitions. Producers that have not yet received the new metadata will likewise see failed operations when sending to brokers whose partition leadership has changed. This should stop happening once the metadata is refreshed on the next retries.
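As a rough sketch, these are the standard Kafka producer settings that govern how long the client keeps retrying (and refreshing metadata) before surfacing the error to the application; the values below are illustrative, not tuned recommendations:

```java
import java.util.Properties;

public class ProducerRetrySketch {
    public static void main(String[] args) {
        // Standard Kafka producer configs that control retry behaviour when a
        // NotLeaderForPartitionException triggers a metadata refresh.
        // Values are illustrative only.
        Properties props = new Properties();
        props.setProperty("retries", "10");                 // resend after the metadata refresh
        props.setProperty("retry.backoff.ms", "1000");      // pause between retries so metadata can catch up
        props.setProperty("delivery.timeout.ms", "120000"); // upper bound on a send including all retries

        props.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

With Spring Cloud Stream, these keys can be supplied through the binder's `configuration` properties rather than built by hand.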