Why is http://localhost:9021/ not opening the Confluent Control Center?
I am working through the Confluent Admin training and running the labs in Docker for Desktop. Please find attached the docker-compose yaml file. The Confluent Control Center doesn't open in the browser; I am using http://localhost:9021 to open it. Earlier it used to open, but not any more. The only change I have made on my computer is installing McAfee Live Safe. I even tried turning off the firewall, but that didn't help either.

Can someone please share if you had a similar experience and how you overcame this issue?

The docker-compose.yaml file:
version: "3.5"
services:
  zk-1:
    image: confluentinc/cp-zookeeper:5.3.1
    hostname: zk-1
    container_name: zk-1
    ports:
      - "12181:2181"
    volumes:
      - data-zk-log-1:/var/lib/zookeeper/log
      - data-zk-data-1:/var/lib/zookeeper/data
    networks:
      - confluent
    environment:
      - ZOOKEEPER_SERVER_ID=1
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
      - ZOOKEEPER_INIT_LIMIT=5
      - ZOOKEEPER_SYNC_LIMIT=2
      - ZOOKEEPER_SERVERS=zk-1:2888:3888;zk-2:2888:3888;zk-3:2888:3888
  zk-2:
    image: confluentinc/cp-zookeeper:5.3.1
    hostname: zk-2
    container_name: zk-2
    ports:
      - "22181:2181"
    volumes:
      - data-zk-log-2:/var/lib/zookeeper/log
      - data-zk-data-2:/var/lib/zookeeper/data
    networks:
      - confluent
    environment:
      - ZOOKEEPER_SERVER_ID=2
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
      - ZOOKEEPER_INIT_LIMIT=5
      - ZOOKEEPER_SYNC_LIMIT=2
      - ZOOKEEPER_SERVERS=zk-1:2888:3888;zk-2:2888:3888;zk-3:2888:3888
  zk-3:
    image: confluentinc/cp-zookeeper:5.3.1
    hostname: zk-3
    container_name: zk-3
    ports:
      - "32181:2181"
    volumes:
      - data-zk-log-3:/var/lib/zookeeper/log
      - data-zk-data-3:/var/lib/zookeeper/data
    networks:
      - confluent
    environment:
      - ZOOKEEPER_SERVER_ID=3
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
      - ZOOKEEPER_INIT_LIMIT=5
      - ZOOKEEPER_SYNC_LIMIT=2
      - ZOOKEEPER_SERVERS=zk-1:2888:3888;zk-2:2888:3888;zk-3:2888:3888
  kafka-1:
    image: confluentinc/cp-enterprise-kafka:5.3.1
    hostname: kafka-1
    container_name: kafka-1
    ports:
      - "19092:9092"
    networks:
      - confluent
    volumes:
      - data-kafka-1:/var/lib/kafka/data
    environment:
      KAFKA_BROKER_ID: 101
      KAFKA_ZOOKEEPER_CONNECT: zk-1:2181,zk-2:2181,zk-3:2181
      KAFKA_LISTENERS: DOCKER://kafka-1:9092,HOST://kafka-1:19092
      KAFKA_ADVERTISED_LISTENERS: DOCKER://kafka-1:9092,HOST://localhost:19092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: DOCKER:PLAINTEXT,HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: DOCKER
      KAFKA_METRIC_REPORTERS: "io.confluent.metrics.reporter.ConfluentMetricsReporter"
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: "kafka-1:9092,kafka-2:9092,kafka-3:9092"
  kafka-2:
    image: confluentinc/cp-enterprise-kafka:5.3.1
    hostname: kafka-2
    container_name: kafka-2
    ports:
      - "29092:9092"
    networks:
      - confluent
    volumes:
      - data-kafka-2:/var/lib/kafka/data
    environment:
      KAFKA_BROKER_ID: 102
      KAFKA_ZOOKEEPER_CONNECT: zk-1:2181,zk-2:2181,zk-3:2181
      KAFKA_LISTENERS: DOCKER://kafka-2:9092,HOST://kafka-2:29092
      KAFKA_ADVERTISED_LISTENERS: DOCKER://kafka-2:9092,HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: DOCKER:PLAINTEXT,HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: DOCKER
      KAFKA_METRIC_REPORTERS: "io.confluent.metrics.reporter.ConfluentMetricsReporter"
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: "kafka-1:9092,kafka-2:9092,kafka-3:9092"
  kafka-3:
    image: confluentinc/cp-enterprise-kafka:5.3.1
    hostname: kafka-3
    container_name: kafka-3
    ports:
      - "39092:9092"
    networks:
      - confluent
    volumes:
      - data-kafka-3:/var/lib/kafka/data
    environment:
      KAFKA_BROKER_ID: 103
      KAFKA_ZOOKEEPER_CONNECT: zk-1:2181,zk-2:2181,zk-3:2181
      KAFKA_LISTENERS: DOCKER://kafka-3:9092,HOST://kafka-3:39092
      KAFKA_ADVERTISED_LISTENERS: DOCKER://kafka-3:9092,HOST://localhost:39092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: DOCKER:PLAINTEXT,HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: DOCKER
      KAFKA_METRIC_REPORTERS: "io.confluent.metrics.reporter.ConfluentMetricsReporter"
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: "kafka-1:9092,kafka-2:9092,kafka-3:9092"
  schema-registry:
    image: confluentinc/cp-schema-registry:5.3.1
    hostname: schema-registry
    container_name: schema-registry
    ports:
      - "8081:8081"
    networks:
      - confluent
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "kafka-1:9092,kafka-2:9092,kafka-3:9092"
      SCHEMA_REGISTRY_LISTENERS: "http://schema-registry:8081,http://localhost:8081"
      # Uses incorrect container utility belt (CUB) environment variables due to bug.
      # See https://github.com/confluentinc/cp-docker-images/issues/807. A fix was merged that
      # will be available in the CP 5.4 image.
      KAFKA_REST_CUB_KAFKA_TIMEOUT: 120
      KAFKA_REST_CUB_KAFKA_MIN_BROKERS: 3
  connect:
    image: confluentinc/cp-kafka-connect:5.3.1
    hostname: connect
    container_name: connect
    ports:
      - "8083:8083"
    volumes:
      - ./data:/data
    networks:
      - confluent
    environment:
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
      CONNECT_BOOTSTRAP_SERVERS: kafka-1:9092,kafka-2:9092,kafka-3:9092
      CONNECT_GROUP_ID: "connect"
      CONNECT_CONFIG_STORAGE_TOPIC: "connect-config"
      CONNECT_OFFSET_STORAGE_TOPIC: "connect-offsets"
      CONNECT_STATUS_STORAGE_TOPIC: "connect-status"
      CONNECT_KEY_CONVERTER: "io.confluent.connect.avro.AvroConverter"
      CONNECT_VALUE_CONVERTER: "io.confluent.connect.avro.AvroConverter"
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_REST_ADVERTISED_HOST_NAME: "connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR
      CONNECT_PLUGIN_PATH: /usr/share/java
      CONNECT_REST_HOST_NAME: "connect"
      CONNECT_REST_PORT: 8083
      CONNECT_CUB_KAFKA_TIMEOUT: 120
  ksql-server:
    image: confluentinc/cp-ksql-server:5.3.1
    hostname: ksql-server
    container_name: ksql-server
    ports:
      - "8088:8088"
    networks:
      - confluent
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties"
      KSQL_BOOTSTRAP_SERVERS: kafka-1:9092,kafka-2:9092,kafka-3:9092
      KSQL_HOST_NAME: ksql-server
      KSQL_APPLICATION_ID: "etl-demo"
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      # Set the buffer cache to 0 so that the KSQL CLI shows all updates to KTables for learning purposes.
      # The default is 10 MB, which means records in a KTable are compacted before showing output.
      # Change cache.max.bytes.buffering and commit.interval.ms to tune this behavior.
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
  control-center:
    image: confluentinc/cp-enterprise-control-center:5.3.1
    hostname: control-center
    container_name: control-center
    restart: always
    networks:
      - confluent
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: kafka-1:9092,kafka-2:9092,kafka-3:9092
      CONTROL_CENTER_ZOOKEEPER_CONNECT: zk-1:2181,zk-2:2181,zk-3:2181
      CONTROL_CENTER_STREAMS_NUM_STREAM_THREADS: 4
      CONTROL_CENTER_REPLICATION_FACTOR: 3
      CONTROL_CENTER_CONNECT_CLUSTER: "connect:8083"
      CONTROL_CENTER_KSQL_URL: "http://ksql-server:8088"
      CONTROL_CENTER_KSQL_ADVERTISED_URL: "http://localhost:8088"
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
  tools:
    image: cnfltraining/training-tools:5.3
    hostname: tools
    container_name: tools
    volumes:
      - ${PWD}/:/apps
    working_dir: /apps
    networks:
      - confluent
    command: /bin/bash
    tty: true
volumes:
  data-zk-log-1:
  data-zk-data-1:
  data-zk-log-2:
  data-zk-data-2:
  data-zk-log-3:
  data-zk-data-3:
  data-kafka-1:
  data-kafka-2:
  data-kafka-3:
networks:
  confluent:
All the Docker containers are up and running; all the respective Confluent services are up.
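To go beyond "the containers are up" and confirm that Control Center itself is healthy, its port mapping and logs can be inspected. A minimal sketch, assuming the compose file above is in the current directory:

```shell
# List containers and port mappings; the control-center row
# should show 0.0.0.0:9021->9021/tcp and a state of "Up".
docker-compose ps

# Tail the Control Center logs and look for errors, such as a
# license/expiry message or a startup failure.
docker-compose logs --tail=100 control-center
```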
Thanks!!
Finally... I got an answer to this from Confluent Support. The version of Control Center used in the labs expires after 30 days. This can be reset by removing all the containers and volumes on the PC. The

docker-compose down -v

command will stop and remove all the containers and volumes. Then bring everything back up with the

docker-compose up -d

command. Now give it a minute or two before opening the Control Center in any browser.

PS: Docker should be given at least 6 GB of memory to run all the containers.
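The reset sequence above can be sketched as a short shell session (assuming the compose file is in the current directory; the final curl check is an optional addition to verify the UI is reachable, not part of the original answer):

```shell
# Stop and remove all containers AND their named volumes (-v),
# which wipes the expired Control Center state.
docker-compose down -v

# Recreate the whole stack in detached mode.
docker-compose up -d

# Give Control Center a minute or two to start, then confirm it
# answers on port 9021 before opening it in a browser (expect 200).
sleep 120
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9021
```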
Thanks.