
Unable to start kafka with zookeeper (kafka.common.InconsistentClusterIdException)

Below are the steps I did that led to this issue:

  1. Launch ZooKeeper
  2. Launch Kafka: .\bin\windows\kafka-server-start.bat .\config\server.properties

The error happens at the second step:

ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID Reu8ClK3TTywPiNLIQIm1w doesn't match stored clusterId Some(BaPSk1bCSsKFxQQ4717R6Q) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
    at kafka.server.KafkaServer.startup(KafkaServer.scala:220)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
    at kafka.Kafka$.main(Kafka.scala:84)
    at kafka.Kafka.main(Kafka.scala)

When I trigger .\bin\windows\kafka-server-start.bat .\config\server.properties, the ZooKeeper console returns:

INFO [SyncThread:0:FileTxnLog@216] - Creating new log file: log.1

How can I fix this issue to get Kafka running?

Edit: You can access the proper question on the right site (Server Fault) here.

Edit: Here is the answer.

I managed to solve this issue with the following steps:

  1. Delete all the log/data files created (or generated) by ZooKeeper and Kafka.
  2. Run ZooKeeper
  3. Run Kafka

[Since this post is open again, I'm posting my answer here so you have everything in the same post.]

**The reason is that Kafka saved the failed cluster ID in meta.properties.**

Try deleting kafka-logs/meta.properties from your tmp folder, which is C:/tmp by default on Windows, and /tmp/kafka-logs on Linux.
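For example, a minimal deletion sketch, assuming the default locations mentioned above:

    # Linux (assuming the default log.dirs=/tmp/kafka-logs):
    rm /tmp/kafka-logs/meta.properties
    # on Windows the equivalent file is C:\tmp\kafka-logs\meta.properties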

If Kafka is running in Docker containers, the log path may be specified by volume config in the docker-compose file - see docs.docker.com/compose/compose-file/compose-file-v2/#volumes -- Chris Halcrow

**How to find the Kafka log path:**

Open the server.properties file located in your Kafka folder, e.g. kafka_2.11-2.4.0\config\server.properties (depending on your Kafka version, the folder name could be kafka_<kafka_version>):

Then search for the log.dirs entry to check where the logs are located:

    log.dirs=/tmp/kafka-logs
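A quick way to check this from a terminal on Linux or Mac (a sketch; the folder name depends on your Kafka version):

    grep '^log.dirs' kafka_2.11-2.4.0/config/server.properties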

For Mac, the following steps are needed (see the command sketch after this list).

  • Stop the Kafka service: brew services stop kafka
  • Open the Kafka server.properties file: vim /usr/local/etc/kafka/server.properties
  • Find the value of log.dirs in this file. For me, it is /usr/local/var/lib/kafka-logs
  • Delete the path-to-log.dirs/meta.properties file
  • Start the Kafka service: brew services start kafka
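A minimal command sketch of those steps, assuming the Homebrew default paths shown above:

    brew services stop kafka
    # confirm the configured log directory first:
    grep '^log.dirs' /usr/local/etc/kafka/server.properties
    # delete the stored cluster metadata (path assumes the default above):
    rm /usr/local/var/lib/kafka-logs/meta.properties
    brew services start kafka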

If you use Embedded Kafka with Testcontainers in your Java project like me, then simply delete your build/kafka folder and Bob's your uncle.

The mentioned meta.properties can be found under build/kafka/out/embedded-kafka.

I deleted the following directories:

a.) The logs directory from the Kafka server's configured location, i.e. the log.dirs property path.

b.) The tmp directory from the Kafka broker's location.

log.dirs=../tmp/kafka-logs-1

I had some old volumes lingering around. I checked the volumes like this:

docker volume list

And pruned old volumes:

docker volume prune

And also removed the ones that were Kafka-related, for example:

docker volume rm test_kafka

I was using docker-compose to re-set up Kafka on a Linux server, with a known, working docker-compose config that sets up a number of Kafka components (broker, zookeeper, connect, rest proxy), and I was getting the issue described in the OP. I fixed this for my dev server instance with the following steps (summarized as commands after this list):

  • docker-compose down
  • back up the kafka-logs directory using cp -r kafka-logs kafka-logs-bak
  • delete the kafka-logs/meta.properties file
  • docker-compose up -d
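A minimal sketch of those steps, assuming the kafka-logs directory sits next to the docker-compose file (as in the volume config shown further down):

    docker-compose down
    cp -r kafka-logs kafka-logs-bak    # backup, in case anything else needs recovering
    rm kafka-logs/meta.properties
    docker-compose up -d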

Note for users of docker-compose:

My log files weren't in the default location (/tmp/kafka-logs). If you're running Kafka in Docker containers, the log path can be specified by volume config in the docker-compose file, e.g.

volumes:
      - ./kafka-logs:/tmp/kafka-logs

This is specifying SOURCE:TARGET. ./kafka-logs is the source (i.e. a directory named kafka-logs, in the same directory as the docker-compose file). This is then targeted to /tmp/kafka-logs as the mounted volume within the Kafka container. So the logs can either be deleted from the source folder on the host machine, or by deleting them from the mounted volume after doing a docker exec into the Kafka container.

see https://docs.docker.com/compose/compose-file/compose-file-v2/#volumes
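For example, to delete the file from the mounted volume instead, a sketch (the container name kafka is an assumption; check docker ps for yours):

    docker exec -it kafka rm /tmp/kafka-logs/meta.properties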

There's no need to delete the log/data files on Kafka. Check the Kafka error logs and find the new cluster ID. Update the meta.properties file with the cluster ID, then restart Kafka.

/home/kafka/logs/meta.properties
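For illustration, meta.properties is a plain properties file; a sketch of the edit, reusing the IDs from the error message above (your values will differ):

    # /home/kafka/logs/meta.properties (illustrative contents)
    version=0
    broker.id=0
    # replace the stored ID (BaPSk1bCSsKFxQQ4717R6Q above) with the cluster ID
    # reported in the InconsistentClusterIdException:
    cluster.id=Reu8ClK3TTywPiNLIQIm1w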

To resolve this issue permanently, follow the steps below.

Check your zookeeper.properties file, look for the dataDir path, and change it from the tmp location to any other location that won't be removed after a server restart.

/home/kafka/kafka/config/zookeeper.properties
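For illustration, the edit might look like this (a sketch; /home/kafka/zookeeper matches the copy command below):

    # /home/kafka/kafka/config/zookeeper.properties
    # before: dataDir=/tmp/zookeeper  (cleared on reboot on many systems)
    dataDir=/home/kafka/zookeeper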

Copy the zookeeper folder and files to the new (non-tmp) location, then restart ZooKeeper and Kafka.

cp -r /tmp/zookeeper /home/kafka/zookeeper

Now a server restart won't affect the Kafka startup.

For me, meta.properties was in /usr/local/var/lib/kafka-logs. After deleting it, Kafka started working.

I also deleted all the content of the folder containing the data generated by Kafka. I could find the folder in my .yml file:

kafka:
    image: confluentinc/cp-kafka:7.0.0
    ports:
      - '9092:9092'
    environment:
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "true"
    volumes:
      - ./kafka-data/data:/var/lib/kafka/data
    depends_on:
      - zookeeper
    networks:
      - default

The location is shown under volumes:. So, in my case, I deleted all files of the data folder located under kafka-data.
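A minimal cleanup sketch, assuming the compose file above (note this wipes all broker data, not just meta.properties):

    docker-compose down
    rm -rf ./kafka-data/data/*
    docker-compose up -d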

I tried deleting the meta.properties file, but it didn't work.

In my case, it was solved by deleting legacy Docker images.

But the problem with this is that it deletes all previous data. So be careful: if you want to keep the old data, this is not the right solution for you.

docker rm $(docker ps -q -f 'status=exited')
docker rmi $(docker images -q -f "dangling=true")
