
Why does the Kafka cluster report the error "Number of alive brokers '0' does not meet the required replication factor"?

I have 2 Kafka brokers and 1 ZooKeeper. Broker configs (server.properties files). Broker 1:

auto.create.topics.enable=true
broker.id=1
delete.topic.enable=true
group.initial.rebalance.delay.ms=0
listeners=PLAINTEXT://5.1.2.3:9092
log.dirs=/opt/kafka_2.12-2.1.0/logs
log.retention.check.interval.ms=300000
log.retention.hours=168
log.segment.bytes=1073741824
max.message.bytes=105906176
message.max.bytes=105906176
num.io.threads=8
num.network.threads=3
num.partitions=10
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
replica.fetch.max.bytes=105906176
socket.receive.buffer.bytes=102400
socket.request.max.bytes=105906176
socket.send.buffer.bytes=102400
transaction.state.log.min.isr=1
transaction.state.log.replication.factor=1
zookeeper.connect=5.1.3.6:2181
zookeeper.connection.timeout.ms=6000

Broker 2:

auto.create.topics.enable=true
broker.id=2
delete.topic.enable=true
group.initial.rebalance.delay.ms=0
listeners=PLAINTEXT://18.4.6.6:9092
log.dirs=/opt/kafka_2.12-2.1.0/logs
log.retention.check.interval.ms=300000
log.retention.hours=168
log.segment.bytes=1073741824
max.message.bytes=105906176
message.max.bytes=105906176
num.io.threads=8
num.network.threads=3
num.partitions=10
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
replica.fetch.max.bytes=105906176
socket.receive.buffer.bytes=102400
socket.request.max.bytes=105906176
socket.send.buffer.bytes=102400
transaction.state.log.min.isr=1
transaction.state.log.replication.factor=1
zookeeper.connect=5.1.3.6:2181
zookeeper.connection.timeout.ms=6000

If I query ZooKeeper like this:

echo dump | nc zook_IP 2181

I get:

SessionTracker dump:
Session Sets (3):
0 expire at Sun Jan 04 03:40:27 MSK 1970:
1 expire at Sun Jan 04 03:40:30 MSK 1970:
        0x1000bef9152000b
1 expire at Sun Jan 04 03:40:33 MSK 1970:
        0x1000147d4b40003
ephemeral nodes dump:
Sessions with Ephemerals (2):
0x1000147d4b40003:
        /controller
        /brokers/ids/2
0x1000bef9152000b:
        /brokers/ids/1

Looks fine, but it doesn't work :(. ZooKeeper sees 2 brokers, but the first Kafka broker logs this error:

 ERROR [KafkaApi-1] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)

We also use kafka_exporter for Prometheus, and it logs this error:

Cannot get oldest offset of topic Some.TOPIC partition 9: kafka server: Request was for a topic or partition that does not exist on this broker." source="kafka_exporter.go:296

Please help. Where is the mistake in my config?

Are your clocks working? ZooKeeper thinks it's 1970:

Sun Jan 04 03:40:27 MSK 1970

You may want to look at the rest of the logs, or check whether Kafka and ZooKeeper are actually running and their ports are open.
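A quick way to check both, as a sketch that assumes the addresses from the question and the Kafka install path shown in log.dirs; zookeeper-shell.sh is the CLI shipped in Kafka's bin directory:

nc -vz 5.1.3.6 2181      # ZooKeeper reachable?
nc -vz 5.1.2.3 9092      # broker 1 listener reachable?
nc -vz 18.4.6.6 9092     # broker 2 listener reachable?

# which brokers have actually registered themselves in ZooKeeper
/opt/kafka_2.12-2.1.0/bin/zookeeper-shell.sh 5.1.3.6:2181 ls /brokers/ids
# a healthy 2-node cluster should print [1, 2]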

In your first message, you see this right after starting a fresh cluster, so it's not a true error:

This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)

The properties you show, though, have listeners on entirely different subnets, and you're not using advertised.listeners.
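For illustration, a hedged sketch of what broker 1 might use instead: bind locally, but advertise the address that clients and the other broker should connect to. 5.1.2.3 is the address from the question; binding on 0.0.0.0 is an assumption about the interfaces:

# broker 1, server.properties
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://5.1.2.3:9092

# broker 2 would advertise 18.4.6.6:9092 in the same way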

A change of Kafka broker.id may cause this problem. Clean up the Kafka metadata under ZooKeeper; note that the Kafka data will be lost.
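A rough sketch of that cleanup with zookeeper-shell.sh, assuming the znode paths Kafka 2.1 creates under the ZooKeeper root. Stop all brokers first; every topic, offset and config stored in ZooKeeper is wiped. The ZooKeeper CLI bundled with Kafka 2.1 uses rmr for recursive deletes; newer ZooKeeper versions call the same command deleteall:

ZK=/opt/kafka_2.12-2.1.0/bin/zookeeper-shell.sh

# broker registrations and controller election nodes
$ZK 5.1.3.6:2181 rmr /brokers
$ZK 5.1.3.6:2181 rmr /controller
$ZK 5.1.3.6:2181 rmr /controller_epoch

# topic/broker configs and pending admin operations
$ZK 5.1.3.6:2181 rmr /config
$ZK 5.1.3.6:2181 rmr /admin
$ZK 5.1.3.6:2181 rmr /isr_change_notification

# old ZooKeeper-based consumer offsets, if any
$ZK 5.1.3.6:2181 rmr /consumers

# afterwards, clear log.dirs (/opt/kafka_2.12-2.1.0/logs) on each broker and restart them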

I got this error message in this situation:

  • Cluster talking over SSL
  • Every broker is a container
  • Updated the certificate with a new password inside ALL brokers
  • Rolling update

After the first broker rebooted, it spammed this error message, and the broker controller reported "a new broker connected but password verification failed".

Solutions:

  1. Set the new certificate's password to the old password (see the sketch after this list)
  2. Take the entire cluster down, then bring it back up all at once
  3. (not tested yet) Change the certificate on one broker, reboot it, then move to the next broker until you have rebooted all of them (ending with the controller)
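As a sketch of solution 1, these are the broker-side keystore settings involved; the paths and passwords below are placeholders, not values from the answer. Until every broker has been restarted, the keystore password configured on disk must still be the one the running cluster was set up with:

# server.properties, SSL listener (placeholder paths and passwords)
listeners=SSL://broker1.example.com:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/etc/kafka/ssl/broker1.keystore.jks
ssl.keystore.password=old-password   # keep the old password until all brokers are updated
ssl.key.password=old-password
ssl.truststore.location=/etc/kafka/ssl/truststore.jks
ssl.truststore.password=truststore-password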
