
Kafka Kubernetes: Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic

I'm trying to set up a Kafka pod in Kubernetes but I keep getting this error:

[2020-08-30 11:23:39,354] ERROR [KafkaApi-0] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)

This is my Kafka deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
  namespace: instagnam
  labels:
    app: instagnam
    service: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: instagnam
  template:
    metadata:
      labels:
        app: instagnam
        service: kafka
        id: "0"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        imagePullPolicy: Always
        ports:
        - containerPort: 9092
          name: kafka
        env:
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
        - name: KAFKA_CREATE_TOPICS
          value: connessioni:2:1,ricette:2:1
        - name: KAFKA_BROKER_ID
          value: "0"

This is my Kafka service:

apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: instagnam
  labels:
    app: instagnam
    service: kafka
spec:
  selector:
    app: instagnam
    service: kafka
    id: "0"
  type: LoadBalancer
  ports:
  - name: kafka
    protocol: TCP
    port: 9092

This is my Zookeeper deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  namespace: instagnam
  labels:
    app: instagnam
    service: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: instagnam
      service: zookeeper
  template:
    metadata:
      labels:
        app: instagnam
        service: zookeeper
    spec:
      containers:
      - image: wurstmeister/zookeeper
        name: zookeeper
        imagePullPolicy: Always
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper

And this is my Zookeeper service:

apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: instagnam
spec:
  selector:
    app: instagnam
    service: zookeeper
  ports:
  - name: client
    protocol: TCP
    port: 2181
  - name: follower
    protocol: TCP
    port: 2888
  - name: leader
    protocol: TCP
    port: 3888

What am I doing wrong here?

If you need the full Kafka log, here it is: https://pastebin.com/eBu8JB8A

And here are the Zookeeper logs if you need them too: https://pastebin.com/gtnxSftW

EDIT: I'm running this on minikube, if that helps.

A change of the Kafka broker.id may cause this problem. Clean up the Kafka metadata under ZooKeeper with deleteall /brokers... Note: the Kafka data will be lost.
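A minimal sketch of that cleanup, run from inside the ZooKeeper pod (the pod name and the znode path are assumptions; `deleteall` requires ZooKeeper 3.5+, older CLIs use `rmr` instead):

```shell
# Open a shell in the ZooKeeper pod (adjust pod name/namespace to yours)
kubectl exec -it -n instagnam deploy/zookeeper -- bash

# Inside the pod: connect with the ZooKeeper CLI and wipe the broker metadata.
# WARNING: this discards all Kafka cluster metadata registered in ZooKeeper.
zkCli.sh -server localhost:2181
deleteall /brokers
```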

Assuming that you're on the same Kafka image, the solution that fixed the issue for me was:

Replacing the deprecated KAFKA_ADVERTISED_PORT and KAFKA_ADVERTISED_HOST_NAME settings, as detailed in the docker image README (see the current docs, or the README commit pinned at the time), with KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS, to which I had to add "inside" and "outside" configurations.

Summarized from https://github.com/wurstmeister/kafka-docker/issues/218#issuecomment-362327563
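For reference, a minimal sketch of what the replacement env entries might look like (the listener names, the 9093 inside port, and the `kafka` Service hostname are assumptions based on the manifests in the question, not taken from the linked issue):

```yaml
env:
# Sockets the broker binds to: one for in-cluster traffic, one for external clients
- name: KAFKA_LISTENERS
  value: INSIDE://:9093,OUTSIDE://:9092
# Addresses the broker advertises back to clients; INSIDE uses the k8s Service name
- name: KAFKA_ADVERTISED_LISTENERS
  value: INSIDE://kafka:9093,OUTSIDE://localhost:9092
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
  value: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
# Brokers talk to each other over the inside listener
- name: KAFKA_INTER_BROKER_LISTENER_NAME
  value: INSIDE
- name: KAFKA_ZOOKEEPER_CONNECT
  value: zookeeper:2181
```

If you expose the broker outside the cluster, the OUTSIDE advertised address must be reachable from clients (e.g. the LoadBalancer IP or `minikube ip`), not `localhost`.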

