Docker compose create kafka topics

Problem: I cannot create topics from docker-compose. I need to create Kafka topics before I run a system under test. This will be part of a pipeline, so using a UI is not an option.

Note: it takes ~15 seconds for Kafka to be ready, so I would need to sleep for about 15 seconds before adding the topics.

Possible solution:

  1. create a shell script that waits 15 seconds and then adds a bunch of topics
  2. create a Dockerfile for it
  3. include that Docker image in the docker-compose.yml just before starting the system under test

Current flow :

  1. create zookeeper - OK
  2. create kafka1 - OK
  3. rest-proxy - OK
  4. create topics <- PROBLEM
  5. create SUT - OK

Current docker-compose.yml :

version: '2'
services:
  zookeeper:
    image: docker.io/confluentinc/cp-zookeeper:5.4.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  Kafka1:
    image: docker.io/confluentinc/cp-enterprise-kafka:5.4.1
    hostname: Kafka1
    container_name: Kafka1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_HOST_NAME: Kafka1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://Kafka1:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: Kafka1:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  rest-proxy:
    image: docker.io/confluentinc/cp-kafka-rest:5.4.1
    depends_on:
      - zookeeper
      - Kafka1
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'Kafka1:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'

  topics:
    image: topics:latest
    hostname: topics
    container_name: topics
    depends_on:
      - zookeeper
      - Kafka1
      - rest-proxy

  sut:
    image: sut:latest
    hostname: sut
    container_name: sut
    depends_on:
      - zookeeper
      - Kafka1
      - rest-proxy
    ports:
      - 5000:80

Current Dockerfile for topics container :

FROM ubuntu:14.04

ADD topics.sh /usr/local/bin/topics.sh

RUN chmod +x /usr/local/bin/topics.sh

CMD /usr/local/bin/topics.sh

Current topics.sh file :

#!/bin/sh
echo "Start: Sleep 15 seconds"
sleep 30;
wait;
echo "Begin creating topics"
docker exec Kafka1 kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_ONE
docker exec Kafka1 kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_TWO
echo "Done creating topics"

Current output :

/usr/local/bin/topics.sh: 1: /usr/local/bin/topics.sh: #!/bin/sh: not found
Start: Sleep 15 seconds
Begin creating topics
/usr/local/bin/topics.sh: 8: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 9: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 10: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 11: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 12: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 13: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 14: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 15: /usr/local/bin/topics.sh: docker: not found
Done creating topics

Topics are not created. I'm stuck. Please help.

The simplest way is to start a separate container inside the docker-compose file (called init-kafka in the example below) that runs the various kafka-topics --create ... commands, after first waiting for Kafka to become reachable by simply running kafka-topics --list ... .

Like this:

version: '2.1'
services:

  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.1
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  # reachable on 9092 from the host and on 29092 from inside docker compose
  kafka:
    image: confluentinc/cp-kafka:6.1.1
    depends_on:
      - zookeeper
    ports:
      - '9092:9092'
    expose:
      - '29092'
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: '1'
      KAFKA_MIN_INSYNC_REPLICAS: '1'

  init-kafka:
    image: confluentinc/cp-kafka:6.1.1
    depends_on:
      - kafka
    entrypoint: [ '/bin/sh', '-c' ]
    command: |
      "
      # blocks until kafka is reachable
      kafka-topics --bootstrap-server kafka:29092 --list

      echo -e 'Creating kafka topics'
      kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists --topic my-topic-1 --replication-factor 1 --partitions 1
      kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists --topic my-topic-2 --replication-factor 1 --partitions 1

      echo -e 'Successfully created the following topics:'
      kafka-topics --bootstrap-server kafka:29092 --list
      "

When running it, the init-kafka container should log something like:

docker logs docker_init-kafka_1


[2021-10-12 02:00:28,728] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:28,832] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:29,033] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:29,335] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)


Creating kafka topics
Created topic my-topic-1.
Created topic my-topic-2.
Successfully created the following topics:
my-topic-1
my-topic-2
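
If the Compose version in use supports the long form of depends_on, the system under test can also be made to wait until init-kafka has exited successfully before it starts. A minimal sketch (the sut service name is only an example, not part of the answer above):

  sut:
    image: sut:latest
    depends_on:
      init-kafka:
        # start only after the topic-creation container has exited with code 0
        condition: service_completed_successfully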

This solution allows us to create a topic from the docker-compose.yml:

  • Refer to the Dockerfile of your Kafka image.

  • Take note of the last command in that Dockerfile (you can see it in the image layers on its Docker Hub page).

  • In my case, for the image confluentinc/cp-kafka:latest, the last command that starts the Kafka service was "/etc/confluent/docker/run".

  • Hence, include the command below in your docker-compose.yml:

    command: sh -c "((sleep 15 && kafka-topics --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 3 --topic topicName)&) && /etc/confluent/docker/run "

This will start the Kafka service, delay for 15 seconds, then create a topic.

Please note that we are assuming it takes 15 seconds for the Kafka service to become fully operational.

    kafka:
        image: confluentinc/cp-kafka:latest
        depends_on:
            - zookeeper
        ports:
            - "29092:29092"
        environment:
            KAFKA_BROKER_ID: 1
            KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
            KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
            KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
            KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
            KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
        command: sh -c "((sleep 15 && kafka-topics --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 3 --topic quick-starter)&) && /etc/confluent/docker/run "
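
To sanity-check that the topic really exists after the delay, you can list topics from inside the broker container; a quick sketch (this assumes the service is named kafka as above and that kafka:9092 is its internal PLAINTEXT listener):

    # list topics using the broker's internal listener
    docker-compose exec kafka kafka-topics --list --bootstrap-server kafka:9092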

Variant 1: Run topic.sh (just the kafka-topics --create in another docker container)

Sorry for providing no full example but let me share the idea:

  1. Create a separate docker container 'kafka-setup' whose only purpose is to provide the Kafka command-line tools. In it, replace the startup command with one that performs some (good-enough) wait operations and then runs /kafka/topic_creator.sh (with host:port parameters for zookeeper and kafka), which is injected via a volume. Once that script has finished, a file K_OUTPUT_FILE is created and exposed, also via a volume (as a prerequisite, the file needs to be deleted before calling docker-compose up). A sketch of what such a topic-creator script might look like follows the sequence description below.

Snippet from docker-compose.yml (the ./kafka folder contains the topic-creator script; ./output receives the 'kafka-done.txt' marker file):

  # This "container" is a helper to pre-create topics
  kafka-setup:
    image: confluentinc/cp-kafka:5.4.3
    depends_on:
      - kafka
    volumes:
      - ./kafka:/kafka
      - ./output:/output
    command: "bash -c 'chmod +x /kafka/topic_creator.sh && \
                       /kafka/topic_creator.sh /kafka/topics.txt $$K_ZK $$K_KAFKA && \
                       touch \"$${K_OUTPUT_FILE}\" && chmod a+rw  \"$${K_OUTPUT_FILE}\"'"
    environment:
      K_ZK: localhost:22181
      K_KAFKA: localhost:19092
      K_OUTPUT_FILE: "/output/kafka-done.txt"

      # dummy values
      KAFKA_BROKER_ID: ignored
      KAFKA_ZOOKEEPER_CONNECT: ignored
    network_mode: host
  2. To run everything in the right order:

bash script snippet:

K_SETUP_OUTPUT="./output"
mkdir -p "$K_SETUP_OUTPUT"
rm -f "$K_SETUP_OUTPUT/kafka-done.txt"

# Start stuff
docker-compose up -d --force-recreate --build --remove-orphans

wait_for_file "$K_SETUP_OUTPUT/kafka-done.txt"
sleep 5
# do your stuff here (e.g.  read -r -p "Press any key to continue..." key )
do_something

with the wait_for_file function:

function wait_for_file {
    local name=$1
    local timeout=${2:-${timeout:-60}}   # max seconds to wait: 2nd argument, else $timeout, else 60
    echo "File waiting ${name}."
    seconds=0
    while [[ "$seconds" -lt "$timeout"  && ! -f "$name" ]];
    do
      echo -n .
      seconds=$((seconds+1))
      sleep 1
    done

    if [ "$seconds" -lt "$timeout" ]; then
      echo "${name} created (${seconds}s)!"
    else
      echo "  ERROR: not found ${name}" >&2
      exit 1
    fi
}

How it works in sequence:

  1. run the test script
  2. create folders and delete ./output/kafka-done.txt
  3. execute docker-compose up
  4. in kafka-setup first wait for availability of zookeeper and kafka ports
  5. run ./kafka/topic_creator.sh with parameters for zookeeper and kafka ports
  6. create ./output/kafka-done.txt
  7. wait_for_file ./output/kafka-done.txt succeeds
  8. do_something .. that's your tests or whatever
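
The answer does not show /kafka/topic_creator.sh itself. A minimal sketch of what it could look like, assuming topics.txt holds one topic name per line and the second and third arguments are the zookeeper and kafka host:port values passed from the compose file (these details are assumptions, not the original script):

#!/bin/bash
# topic_creator.sh <topics-file> <zookeeper-host:port> <kafka-host:port>  -- sketch only
set -e

TOPICS_FILE=$1
ZK=$2        # accepted for compatibility with the call above; not needed by newer kafka-topics
KAFKA=$3

# crude wait: retry listing topics until the broker answers (up to ~60 seconds)
for i in $(seq 1 60); do
  kafka-topics --bootstrap-server "$KAFKA" --list > /dev/null 2>&1 && break
  sleep 1
done

# create one topic per non-empty line of the topics file
while read -r topic; do
  [ -z "$topic" ] && continue
  kafka-topics --bootstrap-server "$KAFKA" --create --if-not-exists \
    --topic "$topic" --partitions 1 --replication-factor 1
done < "$TOPICS_FILE"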

Variant 2: Just allow docker to run without root

See Manage Docker as a non-root user
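
Concretely, that guide boils down to adding your user to the docker group so the Docker CLI works without sudo:

# steps from Docker's "Manage Docker as a non-root user" guide
sudo groupadd docker            # the group may already exist
sudo usermod -aG docker $USER
newgrp docker                   # or log out and back in for the group change to take effect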

I needed to solve topic creation myself, and since I had the liberty of choosing the Kafka image, I went with the wurstmeister Kafka Docker image, which lets you specify topics via the environment variable KAFKA_CREATE_TOPICS, like so:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
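
If I read the wurstmeister/kafka README correctly, KAFKA_CREATE_TOPICS takes a comma-separated list of name:partitions:replicas entries, optionally followed by a cleanup policy, so several topics can be declared at once (the topic names below are just examples):

      # topic-name:partitions:replicas[:cleanup.policy]
      KAFKA_CREATE_TOPICS: "MY_AWESOME_TOPIC_ONE:1:1,MY_AWESOME_TOPIC_TWO:1:1,compacted-topic:1:1:compact"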

Here is a link to a docker-compose example.

Another advantage for me was that an ARM64 version is available; I just had to switch the zookeeper image to the official one.

The ubuntu container doesn't have docker installed.

It also doesn't have the kafka-topics command, so instead you should re-use the cp-enterprise-kafka image that you've already pulled and change its ENTRYPOINT or CMD to your script, running the kafka-topics command directly, as sketched below.
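
A minimal sketch of that adjustment, reusing the init-container pattern from the first answer but keeping the question's Zookeeper address and topic names (the inline script is an untested assumption, not a snippet from this answer):

  topics:
    image: docker.io/confluentinc/cp-enterprise-kafka:5.4.1
    container_name: topics
    depends_on:
      - Kafka1
    # override the broker entrypoint so this container only creates topics, then exits
    entrypoint: [ '/bin/sh', '-c' ]
    command: |
      "
      sleep 15
      kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_ONE
      kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_TWO
      "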

Or replace your Kafka container with wurstmeister/kafka and add an environment variable for creating the topics.

Please include the path to the docker binary as a volume of the kafka container in your compose file: add -v $(which docker):/usr/bin/docker
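
In compose terms that would look roughly like the following for the topics service; mounting the Docker socket as well is normally required for the CLI to reach the host daemon, and the host CLI binary may still be missing libraries inside a minimal base image (a sketch, not part of the original answer):

  topics:
    image: topics:latest
    volumes:
      # host Docker CLI and socket, so `docker exec Kafka1 ...` works from inside this container
      - /usr/bin/docker:/usr/bin/docker
      - /var/run/docker.sock:/var/run/docker.sock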
