Kafka Consumer Coordinator connection issues, Kafka 0.11.0.3

I can't seem to get my Java Kafka client to work. Symptoms:

"Discovered coordinator" is seen in logs, then less than one second later, "Marking the coordinator... dead" is seen.在日志中看到“发现的协调员”,然后不到一秒钟后,看到了“标记协调员......死亡”。 No more output appears after that.之后不再出现 output。

Debugging the code shows that org.apache.kafka.clients.consumer.KafkaConsumer.poll() never returns. The code is stuck in this do-while loop in the ConsumerNetworkClient class:

public boolean awaitMetadataUpdate(long timeout) {
    long startMs = this.time.milliseconds();
    int version = this.metadata.requestUpdate();

    // Poll repeatedly until the metadata version advances (an update
    // arrived) or the timeout elapses.
    do {
        this.poll(timeout);
    } while (this.metadata.version() == version && this.time.milliseconds() - startMs < timeout);

    return this.metadata.version() > version;
}
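
For reference, my consumer is the standard subscribe-and-poll pattern. A minimal sketch of the client code (the topic name is a placeholder, and StringDeserializer stands in here for our custom JSON value deserializer):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FooConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "foo");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) {
                // In the failing scenario this call never returns: it blocks
                // inside ConsumerNetworkClient.awaitMetadataUpdate() as shown above.
                ConsumerRecords<String, String> records = consumer.poll(1000L);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d, value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}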

The logs say:

2019-09-25 15:25:45.268 [main]  INFO    org.apache.kafka.clients.consumer.ConsumerConfig    ConsumerConfig values: 
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [localhost:9092]
    check.crcs = true
    client.id = 
    connections.max.idle.ms = 540000
    enable.auto.commit = true
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = foo
    heartbeat.interval.ms = 3000
    interceptor.classes = null
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 305000
    retry.backoff.ms = 100
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class com.mycompany.KafkaMessageJsonNodeDeserializer

2019-09-25 15:25:45.312 [main]  INFO    org.apache.kafka.common.utils.AppInfoParser -   Kafka version : 0.11.0.3
2019-09-25 15:25:45.312 [main]  INFO    org.apache.kafka.common.utils.AppInfoParser -   Kafka commitId : 26ddb9e3197be39a
2019-09-25 15:25:47.700 [pool-2-thread-1]   INFO    org.apache.kafka.clients.consumer.internals.AbstractCoordinator -   Discovered coordinator ad0c03f60f39:9092 (id: 2147483647 rack: null) for group foo.
2019-09-25 15:25:47.705 [pool-2-thread-1]   INFO    org.apache.kafka.clients.consumer.internals.AbstractCoordinator -   Marking the coordinator ad0c03f60f39:9092 (id: 2147483647 rack: null) dead for group foo

If debug logging is turned on, the logs also contain a message like:

Coordinator discovery failed for group foo, refreshing metadata

More details:

I'm running Kafka inside a Docker container. When I run the console consumer within the container, all is well: messages are received just fine. My app (where the issue occurs) runs outside the Docker container.

The docker run command includes -p 2181:2181 -p 9001:9001 -p 9092:9092.
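
For illustration, the full command has roughly this shape (the image name is a placeholder; only the port mappings are from the actual command):

docker run -p 2181:2181 -p 9001:9001 -p 9092:9092 some-kafka-image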

The stack looks like this when the Kafka client gets stuck in the loop:

awaitMetadataUpdate:134, ConsumerNetworkClient (org.apache.kafka.clients.consumer.internals)
ensureCoordinatorReady:226, AbstractCoordinator (org.apache.kafka.clients.consumer.internals)
ensureCoordinatorReady:203, AbstractCoordinator (org.apache.kafka.clients.consumer.internals)
poll:286, ConsumerCoordinator (org.apache.kafka.clients.consumer.internals)
pollOnce:1078, KafkaConsumer (org.apache.kafka.clients.consumer)
poll:1043, KafkaConsumer (org.apache.kafka.clients.consumer)

It looks like your broker is advertising itself as ad0c03f60f39, and you seem to be running the client from your host machine, which cannot resolve ad0c03f60f39 for obvious reasons. You need to configure the broker to advertise itself as something that is resolvable from the host: look for "advertised.listeners" in server.properties, and set something like PLAINTEXT://localhost:9092.
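
For example, a minimal sketch of the relevant server.properties lines for this setup (assuming the client connects from the host through the published port 9092):

# Bind to all interfaces inside the container...
listeners=PLAINTEXT://0.0.0.0:9092
# ...but advertise an address that clients outside the container can resolve.
advertised.listeners=PLAINTEXT://localhost:9092

This also explains why the console consumer inside the container works: there, the container's own hostname ad0c03f60f39 resolves just fine, while the client on the host cannot resolve it and keeps marking the coordinator dead.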
