
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition - keeps appearing

We are using Spring Kafka 2.4.5, Spring Cloud Stream 3.0.1, and OpenShift, with the following configuration: multiple brokers and topics, each topic with 8 partitions and a replication factor of 3, and multiple Spring Boot consumers.

We get the exception below when we bring down one of the brokers as part of resiliency testing, and we keep getting the same error even after we bring the broker back up.

org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.

2020-05-19 18:39:57.598 ERROR [service,,,] 1 --- [ad | producer-5] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='{49, 50, 49, 50, 54, 53, 56}' and payload='{123, 34, 115, 111, 117, 114, 99, 101, 34, 58, 34, 72, 67, 80, 77, 34, 44, 34, 110, 97, 109, 101, 34...' to topic topicname
2020-05-19 18:39:57.598  WARN [service,,,] 1 --- [ad | producer-5] o.a.k.clients.producer.internals.Sender  : [Producer clientId=producer-5] Received invalid metadata error in produce request on partition topicname-4 due to org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.. Going to request metadata update now

I searched and most answers say that changing the retry value to more than 1 will work, but since the error keeps appearing even after the broker is back up, I am not sure whether that helps.

This is what I have in the properties file:

spring.cloud.stream.kafka.binder.brokers=${output.server}
spring.cloud.stream.kafka.binder.requiredAcks=1
spring.cloud.stream.bindings.outputChannel.destination=${output.topic}
spring.cloud.stream.bindings.outputChannel.content-type=application/json

and one line of code to send messages through the output channel:

```java
client.outputChannel().send(MessageBuilder.withPayload(message)
        .setHeader(KafkaHeaders.MESSAGE_KEY, message.getId().getBytes())
        .build());
```

Please help me.

Thanks, Rams

Broker down

When one of the brokers is brought down, the producer's metadata may not have been refreshed yet. So when it tries to send data to partitions hosted on that broker, the send fails; the producer then requests a metadata update and retries against the correct broker, which is now the new leader.
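This recovery relies on the producer retrying transient errors such as NotLeaderForPartitionException. A minimal sketch of the producer settings that control this, expressed as Spring Cloud Stream binder passthrough properties (the keys are standard Kafka producer configs; the values here are illustrative, not recommendations):

```properties
# Standard Kafka producer configs passed through the binder's configuration map
# Retries transient errors such as NotLeaderForPartitionException
spring.cloud.stream.kafka.binder.configuration.retries=10
# Pause between retries, giving the metadata refresh time to complete
spring.cloud.stream.kafka.binder.configuration.retry.backoff.ms=500
# Overall upper bound on a send including all retries (Kafka >= 2.1)
spring.cloud.stream.kafka.binder.configuration.delivery.timeout.ms=120000
```

With these in place, a brief leader election during a broker outage should be absorbed by the retries rather than surfacing as failed sends.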

Broker restart

When the broker is added back to the cluster, the Controller will trigger a leader rebalance for the topic partitions. Producers that have not yet picked up the new metadata will therefore also see failed sends when writing to brokers whose partition leaders have changed. This should stop once the metadata is refreshed on the next retries.
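If the error persists long after the broker has rejoined, it is worth confirming that every partition actually has a leader again. A sketch using the stock Kafka CLI against a running cluster (the host and topic name are placeholders for your own):

```
# Show the leader and ISR for each partition of the topic;
# a partition with "Leader: none" means leader election has not completed
kafka-topics.sh --describe --topic topicname --bootstrap-server localhost:9092
```

If all partitions report a leader and the full ISR, continued NotLeaderForPartitionException errors point at stale producer metadata rather than the cluster itself.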
