
kafka-reassign-partitions --generate for topic __consumer_offsets gives a strange result: each partition replica stays on a single broker

kafka-reassign-partitions --generate for topic __consumer_offsets gives me a strange result: each partition still has its replica on only one broker, but I expected 3 replicas per partition.

I do the following:

test@kafka-1:~/Kafka-Docker$ sudo docker exec -it broker bash
[appuser@broker ~]$ echo '{"version":1, "topics":[{"topic":"__consumer_offsets"}]}' > topics-to-move.json
[appuser@broker ~]$ kafka-reassign-partitions --zookeeper $KAFKA_ZOOKEEPER_CONNECT --topics-to-move-json-file topics-to-move.json --broker-list "1,2,3" --generate
Warning: --zookeeper is deprecated, and will be removed in a future version of Kafka.
Current partition replica assignment
{"version":1,"partitions":[{"topic":"__consumer_offsets","partition":0,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":1,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":2,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":3,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":4,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":5,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":6,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":7,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":8,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":9,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":10,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":11,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":12,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":13,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":14,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":15,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":16,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":17,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":18,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":19,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":20,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":21,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":22,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":23,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":24,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":25,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":26,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":27,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":28,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":29,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":30,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":31,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":32,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":33,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":34,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":35,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":36,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":37,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":38,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":39,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":40,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":41,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":42,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":43,"replicas":[1],"log_dirs":["any"]},{"topic":"__consum
er_offsets","partition":44,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":45,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":46,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":47,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":48,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":49,"replicas":[1],"log_dirs":["any"]}]}

Proposed partition reassignment configuration
{"version":1,"partitions":[{"topic":"__consumer_offsets","partition":0,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":1,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":2,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":3,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":4,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":5,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":6,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":7,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":8,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":9,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":10,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":11,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":12,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":13,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":14,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":15,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":16,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":17,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":18,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":19,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":20,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":21,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":22,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":23,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":24,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":25,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":26,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":27,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":28,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":29,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":30,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":31,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":32,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":33,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":34,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":35,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":36,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":37,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":38,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":39,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":40,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":41,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":42,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":43,"replicas":[2],"log_dirs":["any"]},{"topic":"__consum
er_offsets","partition":44,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":45,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":46,"replicas":[2],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":47,"replicas":[3],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":48,"replicas":[1],"log_dirs":["any"]},{"topic":"__consumer_offsets","partition":49,"replicas":[2],"log_dirs":["any"]}]}
[appuser@broker ~]$

I see that the --zookeeper option is deprecated, so I also tried this (with the same result):

kafka-reassign-partitions --bootstrap-server kafka-1:9092  --topics-to-move-json-file topics-to-move.json --broker-list "1,2,3" --generate

PROBLEM:

As you can see, "Current partition replica assignment" has only "replicas":[1] everywhere, and "Proposed partition reassignment configuration" has "replicas":[1], "replicas":[2], or "replicas":[3] for each partition.

But for "Proposed partition reassignment configuration" it expected to be something like "replicas: [1,2,3]" or "replicas: [2,3,1]" and so on for redudancy of __commit_offsets across 3 brokers.但是对于“提议的分区重新分配配置”,它期望类似于“副本:[1,2,3]”或“副本:[2,3,1]”等,以减少 3 个代理之间的 __commit_offsets。

Why is each partition bound to only one replica on a single broker? Why aren't 3 replicas of each partition spread across the 3 brokers?

An example of a good topic (though it has only one partition) replicated across all 3 brokers looks like this:

{"version":1,"partitions":[{"topic":"test_topic","partition":0,"replicas":[1,2,3],"log_dirs":["any","any","any"]}]}

My setup: 3 VMs, 192.168.1.11, .12 and .13, each running a zookeeper and a broker. The zookeepers are connected into a quorum OK: through the REST API I see exactly one leader (for example http://192.168.1.11:4888/commands/stats, and the same for the .12 and .13 IPs), which is good. I also tested switching off one of the zookeepers, and a new leader was elected in that case - it works perfectly.

An example docker-compose.yml for the 192.168.1.11 VM (the others are similar):

version: '3.7'

x-zoo: &zoo "kafka-1:2888:3888;kafka-2:2888:3888;kafka-3:2888:3888"
x-kafkaZookeepers: &kafkaZookeepers "kafka-1:2181,kafka-2:2181,kafka-3:2181"
x-kafkaBrokers: &kafkaBrokers "kafka-1:9092,kafka-2:9092,kafka-3:9092"

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
      - "4888:8080"
    extra_hosts:
      - "kafka-1:192.168.1.11"
      - "kafka-2:192.168.1.12"
      - "kafka-3:192.168.1.13"
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_QUORUM_LISTEN_ON_ALL_IPS: 'true'
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_PEER_PORT: 2888
      ZOOKEEPER_LEADER_PORT: 3888
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: *zoo
    volumes:
      - ./kafka-data/zookeeper:/var/lib/zookeeper/data
      - ./kafka-data/zookeeper-logs:/var/lib/zookeeper/log
    networks:
      - mynet

  broker:
    image: confluentinc/cp-kafka:6.2.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    extra_hosts:
      - "kafka-1:192.168.1.11"
      - "kafka-2:192.168.1.12"
      - "kafka-3:192.168.1.13"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: OUTSIDE://0.0.0.0:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: OUTSIDE:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: OUTSIDE://192.168.1.11:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: OUTSIDE
      KAFKA_ZOOKEEPER_CONNECT: *kafkaZookeepers
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE: 'LogAppendTime'
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONNECTIONS_MAX_IDLE_MS: 31536000000 # 1 year
    volumes:
      - ./kafka-data/kafka:/var/lib/kafka/data
    networks:
      - mynet

networks:
  mynet:
    driver: bridge

It was first set up with:

  1. KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  2. KAFKA_DEFAULT_REPLICATION_FACTOR: 1

Then 2 more setups were added on the .12 and .13 VMs (of course with different ZOOKEEPER_SERVER_ID and KAFKA_BROKER_ID), and on each of them KAFKA_DEFAULT_REPLICATION_FACTOR and KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR were changed from 1 to 3.

After that, new topics were created OK with replication factor 3, and I was also able to easily change the replication factor to 3 for my existing non-system topics - except, of course, for the topic __consumer_offsets, which needs to be reassigned using the kafka-reassign-partitions command-line tool.
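For reference, a topic like the test_topic shown above could have been created with something like this (name and partition count are only illustrative):

kafka-topics --bootstrap-server kafka-1:9092 --create --topic test_topic --partitions 1 --replication-factor 3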

The offsets topic only gets created on a brand new cluster. It will not retroactively get updated if you modify offsets.topic.replication.factor.

Similarly, any topic created before you increased default.replication.factor will also only have one replica.

The reassign-partitions tool doesn't take your server properties into account: --generate preserves the topic's current replication factor (1 here) and merely rebalances those single replicas across the broker list. You either need a different tool, or you have to place the replicas manually yourself. - How to change the number of replicas of a Kafka topic?
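As a minimal sketch of the manual route - assuming the default 50 partitions for __consumer_offsets and broker IDs 1, 2 and 3 - you can build the reassignment JSON yourself, rotating 3 replicas across the brokers, then feed it to --execute:

{
  echo -n '{"version":1,"partitions":['
  for p in $(seq 0 49); do
    l=$(( p % 3 + 1 ))        # preferred leader rotates across brokers 1,2,3
    f=$(( (p + 1) % 3 + 1 ))  # first follower
    s=$(( (p + 2) % 3 + 1 ))  # second follower
    [ "$p" -gt 0 ] && echo -n ','
    echo -n "{\"topic\":\"__consumer_offsets\",\"partition\":$p,\"replicas\":[$l,$f,$s]}"
  done
  echo ']}'
} > reassign.json

kafka-reassign-partitions --bootstrap-server kafka-1:9092 --reassignment-json-file reassign.json --execute
kafka-reassign-partitions --bootstrap-server kafka-1:9092 --reassignment-json-file reassign.json --verify

The first broker in each replicas list becomes the preferred leader, so rotating the order spreads leadership as well as data.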
