
Unable to start kafka with zookeeper (kafka.common.InconsistentClusterIdException)

Below are the steps I did that led to this issue:

  1. Launch ZooKeeper
  2. Launch Kafka: .\bin\windows\kafka-server-start.bat .\config\server.properties

And at the second step the error happens:

ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID Reu8ClK3TTywPiNLIQIm1w doesn't match stored clusterId Some(BaPSk1bCSsKFxQQ4717R6Q) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
    at kafka.server.KafkaServer.startup(KafkaServer.scala:220)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
    at kafka.Kafka$.main(Kafka.scala:84)
    at kafka.Kafka.main(Kafka.scala)

When I trigger .\bin\windows\kafka-server-start.bat .\config\server.properties the ZooKeeper console returns:

INFO [SyncThread:0:FileTxnLog@216] - Creating new log file: log.1

How can I fix this issue to get Kafka running?

Edit: You can access the proper question on the right site (Server Fault) here

Edit: Here is the answer

I managed to solve this issue with the following steps:

  1. Delete all the log/data files created (or generated) by ZooKeeper and Kafka.
  2. Run ZooKeeper
  3. Run Kafka

[Since this post is open again, I'm posting my answer here so you have everything in the same post.]

The reason is that Kafka saved the failed cluster ID in meta.properties.

Try deleting kafka-logs/meta.properties from your tmp folder, which is located at C:/tmp by default on Windows, and at /tmp/kafka-logs on Linux.
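As a minimal sketch, assuming the default locations above (check log.dirs in your server.properties if yours differ):

    # Linux default location:
    rm /tmp/kafka-logs/meta.properties
    # On Windows, delete C:\tmp\kafka-logs\meta.properties instead.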

If Kafka is running in Docker containers, the log path may be specified by the volume config in the docker-compose file; see docs.docker.com/compose/compose-file/compose-file-v2/#volumes (Chris Halcrow)

How to find the Kafka log path:

Open the server.properties file, which is located in your Kafka folder at kafka_2.11-2.4.0\config\server.properties (depending on your version of Kafka, the folder name could be kafka_<kafka_version>):

Then search for the log.dirs entry to check where the logs are located:

    log.dirs=/tmp/kafka-logs
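If you prefer the shell, something like this shows the configured directory (the kafka_2.11-2.4.0 path is just the example folder name from above):

    grep "^log.dirs" kafka_2.11-2.4.0/config/server.properties
    # e.g. log.dirs=/tmp/kafka-logs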

For Mac, the following steps are needed (a consolidated shell sketch follows the list).

  • Stop the Kafka service: brew services stop kafka
  • Open the Kafka server.properties file: vim /usr/local/etc/kafka/server.properties
  • Find the value of log.dirs in this file. For me, it is /usr/local/var/lib/kafka-logs
  • Delete the path-to-log.dirs/meta.properties file
  • Start the Kafka service: brew services start kafka
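The same steps as a shell sketch, assuming the Homebrew default paths mentioned above:

    brew services stop kafka
    grep "^log.dirs" /usr/local/etc/kafka/server.properties    # e.g. /usr/local/var/lib/kafka-logs
    rm /usr/local/var/lib/kafka-logs/meta.properties
    brew services start kafka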

If you use Embedded Kafka with Testcontainers in your Java project, as I do, then simply delete your build/kafka folder and Bob's your uncle.

The mentioned meta.properties can be found under build/kafka/out/embedded-kafka.
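As a one-line shell sketch, using the build/kafka path mentioned above:

    rm -r build/kafka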

I deleted the following directories:

a.) The logs directory from the Kafka server's configured location, i.e. the log.dirs property path.

b.) The tmp directory from the Kafka broker's location.

log.dirs=../tmp/kafka-logs-1
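A sketch of the deletion, assuming the relative log.dirs value above and that you run it from the broker's working directory:

    rm -r ../tmp/kafka-logs-1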

I had some old volumes lingering around. I checked the volumes like this:

docker volume list

And pruned old volumes:

docker volume prune

And also removed the Kafka-related ones, for example:

docker volume rm test_kafka
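To preview which volumes are Kafka-related before removing them, a filter like this helps (the grep pattern is just an example):

    docker volume ls | grep kafka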

I was using docker-compose to re-set up Kafka on a Linux server, with a known, working docker-compose config that sets up a number of Kafka components (broker, zookeeper, connect, rest proxy), and I was getting the issue described in the OP. I fixed this for my dev server instance by doing the following (see the sketch after this list):

  • docker-compose down
  • Back up the kafka-logs directory using cp -r kafka-logs kafka-logs-bak
  • Delete the kafka-logs/meta.properties file
  • docker-compose up -d
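The same steps as a shell sketch (the kafka-logs path depends on your volume config; see the note below):

    docker-compose down
    cp -r kafka-logs kafka-logs-bak    # backup first
    rm kafka-logs/meta.properties
    docker-compose up -d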

Note for users of docker-compose:

My log files weren't in the default location (/tmp/kafka-logs). If you're running Kafka in Docker containers, the log path can be specified by the volume config in the docker-compose file, e.g.

volumes:
      - ./kafka-logs:/tmp/kafka-logs

This specifies SOURCE:TARGET. ./kafka-logs is the source (i.e. a directory named kafka-logs in the same directory as the docker-compose file). It is mapped to /tmp/kafka-logs, the mounted volume inside the Kafka container. So the logs can be deleted either from the source folder on the host machine, or from the mounted volume after doing a docker exec into the Kafka container.
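A sketch of the in-container variant (the container name kafka is hypothetical; use docker ps to find yours):

    docker exec kafka rm /tmp/kafka-logs/meta.properties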

see https://docs.docker.com/compose/compose-file/compose-file-v2/#volumes

There is no need to delete the log/data files of Kafka. Check the Kafka error logs to find the new cluster ID, update the meta.properties file with that cluster ID, and then restart Kafka.

/home/kafka/logs/meta.properties
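A sketch of the edit with GNU sed, using the cluster ID reported in the error at the top of this post (take the ID from your own error message):

    sed -i 's/^cluster.id=.*/cluster.id=Reu8ClK3TTywPiNLIQIm1w/' /home/kafka/logs/meta.properties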

To resolve this issue permanently, follow the steps below.

Check your zookeeper.properties file, look for the dataDir path, and change it from the tmp location to some other location that won't be removed after a server restart.

/home/kafka/kafka/config/zookeeper.properties
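A sketch of the change with GNU sed, using this answer's example target path:

    sed -i 's|^dataDir=.*|dataDir=/home/kafka/zookeeper|' /home/kafka/kafka/config/zookeeper.properties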

Copy the zookeeper folder and files to the new (non-tmp) location, then restart ZooKeeper and Kafka.

cp -r /tmp/zookeeper /home/kafka/zookeeper

Now a server restart won't affect Kafka's startup.

For me, meta.properties was in /usr/local/var/lib/kafka-logs. After deleting it, Kafka started working.

I also deleted all the content of the folder containing the data generated by Kafka. I could find the folder in my .yml file:

 kafka:
    image: confluentinc/cp-kafka:7.0.0
    ports:
      - '9092:9092'
    environment:
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "true"
    volumes:
      - ./kafka-data/data:/var/lib/kafka/data
    depends_on:
      - zookeeper
    networks:
      - default

The location is given under volumes:. So, in my case I deleted all files in the data folder located under kafka-data.
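A sketch matching the compose file above (back the directory up first if you need the old data):

    rm -r ./kafka-data/data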

I tried deleting the meta.properties file, but it didn't work.

In my case, it was solved by deleting legacy Docker containers and images.

But the problem with this is that it deletes all previous data. So be careful: if you want to keep the old data, this is not the right solution for you.

docker rm $(docker ps -q -f 'status=exited')
docker rmi $(docker images -q -f "dangling=true")
