
kafka-console-consumer running in docker container stops consuming messages

I have Confluent Kafka (v4.0.0) brokers and 3 ZooKeeper nodes running in docker-compose. A test topic was created with 10 partitions and replication factor 3. When a console consumer is created without the --group option (so a group.id is assigned automatically), it keeps consuming messages continuously, even after a broker is killed and then brought back online.
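
For reference, the topic was created along these lines (the exact create command isn't shown in this post, so treat it as an approximation; the ZooKeeper port is the one used in the describe commands further down):

$ docker run --net=host confluentinc/cp-kafka:4.0.0 kafka-topics --zookeeper localhost:22181 --create --topic starcom.status --partitions 10 --replication-factor 3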

However, if I create a console consumer with the --group option ('console-group'), message consumption stops after a Kafka broker is killed.

$ docker run --net=host confluentinc/cp-kafka:4.0.0 kafka-console-consumer --bootstrap-server localhost:19092,localhost:29092,localhost:39092 --topic starcom.status --from-beginning --group console-group
<< some messages consumed >>
<< broker got killed >> 
[2017-12-31 18:34:05,344] WARN [Consumer clientId=consumer-1, groupId=console-group] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
<< no message after this >> 

Even after the broker comes back online, the consumer group doesn't consume any further messages.

The strange thing is that there is no lag for that consumer group when I check with the kafka-consumer-groups tool. In other words, the consumer offsets are still advancing for that group. There is no other consumer running with that group.id, so something is wrong.
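
The lag check was done with the kafka-consumer-groups tool, roughly like this (the exact invocation isn't captured above, so this is an approximation):

$ docker run --net=host confluentinc/cp-kafka:4.0.0 kafka-consumer-groups --bootstrap-server localhost:19092,localhost:29092,localhost:39092 --describe --group console-group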

Based on the logs, the group appears to have stabilized.

kafka-2_1           | [2017-12-31 17:35:40,743] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 0 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 17:35:43,746] INFO [GroupCoordinator 2]: Stabilized group console-group generation 1 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 17:35:43,765] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 1 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 17:54:30,228] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 1 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 17:54:31,162] INFO [GroupCoordinator 2]: Stabilized group console-group generation 2 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 17:54:31,173] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 2 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 17:57:25,273] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 2 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 17:57:28,256] INFO [GroupCoordinator 2]: Stabilized group console-group generation 3 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 17:57:28,267] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 3 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 17:57:53,594] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 3 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 17:57:55,322] INFO [GroupCoordinator 2]: Stabilized group console-group generation 4 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 17:57:55,336] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 4 (kafka.coordinator.group.GroupCoordinator)
kafka-3_1           | [2017-12-31 18:15:07,953] INFO [GroupCoordinator 3]: Preparing to rebalance group console-group-2 with old generation 0 (__consumer_offsets-22) (kafka.coordinator.group.GroupCoordinator)
kafka-3_1           | [2017-12-31 18:15:10,987] INFO [GroupCoordinator 3]: Stabilized group console-group-2 generation 1 (__consumer_offsets-22) (kafka.coordinator.group.GroupCoordinator)
kafka-3_1           | [2017-12-31 18:15:11,044] INFO [GroupCoordinator 3]: Assignment received from leader for group console-group-2 for generation 1 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:08:59,087] INFO [GroupCoordinator 2]: Loading group metadata for console-group with generation 4 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:09:02,453] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 4 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:09:03,309] INFO [GroupCoordinator 2]: Stabilized group console-group generation 5 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:09:03,471] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 5 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:10:32,010] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 5 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:10:34,006] INFO [GroupCoordinator 2]: Stabilized group console-group generation 6 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:10:34,040] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 6 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:12:02,014] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 6 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:12:09,449] INFO [GroupCoordinator 2]: Stabilized group console-group generation 7 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:12:09,466] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 7 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:16:29,277] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 7 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:16:31,924] INFO [GroupCoordinator 2]: Stabilized group console-group generation 8 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:16:31,945] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 8 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:17:54,813] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 8 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:18:01,256] INFO [GroupCoordinator 2]: Stabilized group console-group generation 9 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:18:01,278] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 9 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:33:47,316] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 9 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:33:49,709] INFO [GroupCoordinator 2]: Stabilized group console-group generation 10 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:33:49,745] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 10 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:34:05,484] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 10 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:34:07,845] INFO [GroupCoordinator 2]: Stabilized group console-group generation 11 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 18:34:07,865] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 11 (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 19:34:16,436] INFO [GroupCoordinator 2]: Preparing to rebalance group console-group with old generation 11 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 19:34:18,221] INFO [GroupCoordinator 2]: Stabilized group console-group generation 12 (__consumer_offsets-33) (kafka.coordinator.group.GroupCoordinator)
kafka-2_1           | [2017-12-31 19:34:18,248] INFO [GroupCoordinator 2]: Assignment received from leader for group console-group for generation 12 (kafka.coordinator.group.GroupCoordinator) 

And topic replication happened normally: in the middle describe output below, the killed broker (1) drops out of the ISR, and in the last one it has rejoined.

$ docker run --net=host confluentinc/cp-kafka:4.0.0 kafka-topics --zookeeper localhost:22181 --topic starcom.status --describe
Topic:starcom.status    PartitionCount:10       ReplicationFactor:3     Configs:
        Topic: starcom.status   Partition: 0    Leader: 3       Replicas: 3,1,2 Isr: 2,3,1
        Topic: starcom.status   Partition: 1    Leader: 1       Replicas: 1,2,3 Isr: 3,2,1
        Topic: starcom.status   Partition: 2    Leader: 2       Replicas: 2,3,1 Isr: 3,2,1
        Topic: starcom.status   Partition: 3    Leader: 3       Replicas: 3,2,1 Isr: 3,2,1
        Topic: starcom.status   Partition: 4    Leader: 1       Replicas: 1,3,2 Isr: 3,2,1
        Topic: starcom.status   Partition: 5    Leader: 2       Replicas: 2,1,3 Isr: 3,2,1
        Topic: starcom.status   Partition: 6    Leader: 3       Replicas: 3,1,2 Isr: 2,3,1
        Topic: starcom.status   Partition: 7    Leader: 1       Replicas: 1,2,3 Isr: 3,2,1
        Topic: starcom.status   Partition: 8    Leader: 2       Replicas: 2,3,1 Isr: 3,2,1
        Topic: starcom.status   Partition: 9    Leader: 3       Replicas: 3,2,1 Isr: 3,2,1

$ docker run --net=host confluentinc/cp-kafka:4.0.0 kafka-topics --zookeeper localhost:22181 --topic starcom.status --describe
Topic:starcom.status    PartitionCount:10       ReplicationFactor:3     Configs:
        Topic: starcom.status   Partition: 0    Leader: 3       Replicas: 3,1,2 Isr: 2,3
        Topic: starcom.status   Partition: 1    Leader: 2       Replicas: 1,2,3 Isr: 3,2
        Topic: starcom.status   Partition: 2    Leader: 2       Replicas: 2,3,1 Isr: 3,2
        Topic: starcom.status   Partition: 3    Leader: 3       Replicas: 3,2,1 Isr: 3,2
        Topic: starcom.status   Partition: 4    Leader: 3       Replicas: 1,3,2 Isr: 3,2
        Topic: starcom.status   Partition: 5    Leader: 2       Replicas: 2,1,3 Isr: 3,2
        Topic: starcom.status   Partition: 6    Leader: 3       Replicas: 3,1,2 Isr: 2,3
        Topic: starcom.status   Partition: 7    Leader: 2       Replicas: 1,2,3 Isr: 3,2
        Topic: starcom.status   Partition: 8    Leader: 2       Replicas: 2,3,1 Isr: 3,2
        Topic: starcom.status   Partition: 9    Leader: 3       Replicas: 3,2,1 Isr: 3,2

$ docker run --net=host confluentinc/cp-kafka:4.0.0 kafka-topics --zookeeper localhost:22181 --topic starcom.status --describe
Topic:starcom.status    PartitionCount:10       ReplicationFactor:3     Configs:
        Topic: starcom.status   Partition: 0    Leader: 3       Replicas: 3,1,2 Isr: 2,3,1
        Topic: starcom.status   Partition: 1    Leader: 1       Replicas: 1,2,3 Isr: 3,2,1
        Topic: starcom.status   Partition: 2    Leader: 2       Replicas: 2,3,1 Isr: 3,2,1
        Topic: starcom.status   Partition: 3    Leader: 3       Replicas: 3,2,1 Isr: 3,2,1
        Topic: starcom.status   Partition: 4    Leader: 1       Replicas: 1,3,2 Isr: 3,2,1
        Topic: starcom.status   Partition: 5    Leader: 2       Replicas: 2,1,3 Isr: 3,2,1
        Topic: starcom.status   Partition: 6    Leader: 3       Replicas: 3,1,2 Isr: 2,3,1
        Topic: starcom.status   Partition: 7    Leader: 1       Replicas: 1,2,3 Isr: 3,2,1
        Topic: starcom.status   Partition: 8    Leader: 2       Replicas: 2,3,1 Isr: 3,2,1
        Topic: starcom.status   Partition: 9    Leader: 3       Replicas: 3,2,1 Isr: 3,2,1

Is this a limitation of the (Confluent) Kafka console consumer? Basically, I am trying to verify with this smaller test that my real Java Kafka consumers can survive broker downtime.

Any help will be appreciated.

EDIT (year 2018!):

I completely recreated my docker(-compose) environment and was able to reproduce this. This time I created a 'new-group' consumer group, and the console consumer threw the error below after the broker restarted. Since then, no messages have been consumed. Again, according to the consumer-groups tool, the consumer offsets keep moving forward.

[2018-01-01 19:18:32,935] ERROR [Consumer clientId=consumer-1, groupId=new-group] Offset commit failed on partition starcom.status-4 at offset 0: This is not the correct coordinator. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2018-01-01 19:18:32,936] WARN [Consumer clientId=consumer-1, groupId=new-group] Asynchronous auto-commit of offsets {starcom.status-4=OffsetAndMetadata{offset=0, metadata=''}, starcom.status-5=OffsetAndMetadata{offset=0, metadata=''}, starcom.status-6=OffsetAndMetadata{offset=2, metadata=''}} failed: Offset commit failed with a retriable exception. You should retry committing offsets. The underlying error was: This is not the correct coordinator. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)

It turned out to be a Docker newbie error.

When I hit Ctrl+C in the kafka-console-consumer shell, the container (group.id: "console-group") was merely put into detached mode. I didn't realize that until I ran docker ps [-n | -a]. When I started another console consumer with the same command (docker run --net=host confluentinc/cp-kafka:4.0.0 kafka-console-consumer --bootstrap-server localhost:19092,localhost:29092,localhost:39092 --topic starcom.status --from-beginning --group console-group), it joined the same "console-group". That's why subsequent messages (which I was producing with the same partitioning key) were consumed by the first consumer still running in the background, giving me the false impression that messages were being lost, and that's also why the consumer-groups command showed the offsets advancing correctly.

After re-attaching the original consumer to the foreground (docker attach <<container-id>>) in a different window, I can see all produced messages being consumed across the two consoles according to the partition assignment. Everything works as expected. Sorry for the false alarm, but hopefully someone who runs into the same issue will get a hint from this.
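
For anyone else who hits this, a quick way to find and recover (or clean up) the detached consumer container; the container ID is a placeholder:

$ docker ps -a                      # the 'forgotten' console consumer shows up here as a still-running container
$ docker attach <<container-id>>    # re-attach it to the foreground, or
$ docker rm -f <<container-id>>     # remove it if it is no longer needed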

To summarize, if I just want to consume a few messages, the correct way to set up kafka-console-consumer in a Docker environment would have been:

docker run --net=host --rm -i -t \
    confluentinc/cp-kafka:4.0.0 \
      kafka-console-consumer --bootstrap-server localhost:19092,localhost:29092,localhost:39092 --topic foo.bar

Notice the --rm, -i, and -t options. -i and -t are not needed if you pass --max-messages, in which case the console consumer exits normally, stopping and tearing down the container on its own.
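
For example, a one-shot variant along these lines should drain a fixed number of messages and then clean up the container by itself (the count of 10 is arbitrary):

docker run --net=host --rm \
    confluentinc/cp-kafka:4.0.0 \
      kafka-console-consumer --bootstrap-server localhost:19092,localhost:29092,localhost:39092 --topic foo.bar --from-beginning --max-messages 10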
