
Can't connect to Spotify Kafka container, basic connection problems

Stumbling at the basics with Docker and Kafka; I can't get a client connection.

What I've done so far:

1) Installed Docker for Windows on Windows 10.
2) Opened Kitematic, searched for Kafka, and selected the spotify/kafka image (the wurstmeister image failed to start).
3) The container fires up and I can see the image running in the container logs.
4) "IP and Ports" reports the Docker port as 9092, and the access port as localhost:32768.

docker ps shows this:

    7bf9f9278e64   spotify/kafka:latest   "supervisord -n"   2 hours ago   Up 57 minutes   0.0.0.0:32769->2181/tcp, 0.0.0.0:32768->9092/tcp   kafka

docker-machine active returns no active host.

My Groovy class (more or less cut and pasted from an example) sets up the connection like this:

import org.apache.kafka.clients.producer.Producer

class KafkaProducer {

    String topicName = "wills topic"    // note: the space makes this an illegal topic name (see INVALID_TOPIC_EXCEPTION in the log below)
    Producer<String, String> producer

    def init () {
        Properties props = new Properties()
        props.put("bootstrap.servers", "192.168.1.89:32768" )   // host IP and externally mapped port (9092 inside the container)
        props.put("acks", "all")                                 // acknowledgements for producer requests
        props.put("retries", 0)                                  // if the request fails, the producer can automatically retry
        props.put("batch.size", 16384)                           // batch buffer size
        props.put("linger.ms", 1)                                // small delay to allow requests to batch up
        props.put("buffer.memory", 33554432)                     // total memory available to the producer for buffering
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")    // deserializers are consumer settings, hence the "isn't a known config" warnings below
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

        producer = new org.apache.kafka.clients.producer.KafkaProducer<String, String>(props)
    }
    ....
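For reference, a minimal send step in the same style (a hypothetical sketch, not the truncated remainder of the original class; it uses a hyphenated topic name because spaces are not legal in Kafka topic names):

    def send(String msg) {
        // "wills topic" (with a space) would be rejected by the broker, as the
        // INVALID_TOPIC_EXCEPTION in the log below shows, so a hyphenated name is used here
        def record = new org.apache.kafka.clients.producer.ProducerRecord<String, String>("wills-topic", msg)
        producer.send(record).get()   // .get() blocks for the broker ack, so connection errors surface immediately
    }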

When I run this init I get errors saying it can't resolve the connection: java.io.IOException: Can't resolve address: 7bf9f9278e64:9092, which is the container's internal hostname and port. (My script is calling from my normal desktop IDE environment.)

Kitematic says this is the mapping, so why can't I connect and then send? Also, since I just downloaded via Kitematic, where does one put the docker-compose.yml if you want to change the config? It's really not clear where to do this.

18:05:41.022 [main] INFO  o.a.k.c.p.ProducerConfig:[.logAll:] > ProducerConfig values: 
    acks = all
    batch.size = 16384
    block.on.buffer.full = false
    bootstrap.servers = [192.168.1.89:32768]
    buffer.memory = 33554432
    client.id = 
    compression.type = none
    connections.max.idle.ms = 540000
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 1
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.fetch.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 0
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    timeout.ms = 30000
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer

18:05:41.076 [main] INFO  o.a.k.c.p.ProducerConfig:[.logAll:] > ProducerConfig values: 
    acks = all
    batch.size = 16384
    block.on.buffer.full = false
    bootstrap.servers = [192.168.1.89:32768]
    buffer.memory = 33554432
    client.id = producer-1
    compression.type = none
    connections.max.idle.ms = 540000
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 1
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.fetch.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 0
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    timeout.ms = 30000
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer

18:05:41.079 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name bufferpool-wait-time
18:05:41.083 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name buffer-exhausted-records
18:05:41.085 [main] DEBUG o.a.k.c.Metadata:[.update:] > Updated cluster metadata version 1 to Cluster(id = null, nodes = [192.168.1.89:32768 (id: -1 rack: null)], partitions = [])
18:05:41.401 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name connections-closed:
18:05:41.401 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name connections-created:
18:05:41.402 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name bytes-sent-received:
18:05:41.402 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name bytes-sent:
18:05:41.406 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name bytes-received:
18:05:41.406 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name select-time:
18:05:41.407 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name io-time:
18:05:41.409 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name batch-size
18:05:41.410 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name compression-rate
18:05:41.410 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name queue-time
18:05:41.410 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name request-time
18:05:41.410 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name produce-throttle-time
18:05:41.411 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name records-per-request
18:05:41.412 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name record-retries
18:05:41.412 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name errors
18:05:41.412 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name record-size-max
18:05:41.414 [main] WARN  o.a.k.c.p.ProducerConfig:[.logUnused:] > The configuration 'key.deserializer' was supplied but isn't a known config.
18:05:41.414 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.p.i.Sender:[.run:] > Starting Kafka producer I/O thread.
18:05:41.414 [main] WARN  o.a.k.c.p.ProducerConfig:[.logUnused:] > The configuration 'value.deserializer' was supplied but isn't a known config.
18:05:41.416 [main] INFO  o.a.k.c.u.AppInfoParser:[.<init>:] > Kafka version : 0.10.1.1
18:05:41.416 [main] INFO  o.a.k.c.u.AppInfoParser:[.<init>:] > Kafka commitId : f10ef2720b03b247
18:05:41.417 [main] DEBUG o.a.k.c.p.KafkaProducer:[.<init>:] > Kafka producer started
18:05:41.430 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.maybeUpdate:] > Initialize connection to node -1 for sending metadata request
18:05:41.430 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.initiateConnect:] > Initiating connection to node -1 at 192.168.1.89:32768.
18:05:41.434 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name node--1.bytes-sent
18:05:41.434 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name node--1.bytes-received
18:05:41.435 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name node--1.latency
18:05:41.435 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.n.Selector:[.pollSelectionKeys:] > Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
18:05:41.436 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.handleConnections:] > Completed connection to node -1
18:05:41.452 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.maybeUpdate:] > Sending metadata request {topics=[wills topic]} to node -1
18:05:41.476 [kafka-producer-network-thread | producer-1] WARN  o.a.k.c.NetworkClient:[.handleResponse:] > Error while fetching metadata with correlation id 0 : {wills topic=INVALID_TOPIC_EXCEPTION}
18:05:41.477 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.Metadata:[.update:] > Updated cluster metadata version 2 to Cluster(id = 8cjV2Ga6RB6bXfeDWWfTKA, nodes = [7bf9f9278e64:9092 (id: 0 rack: null)], partitions = [])
18:05:41.570 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.maybeUpdate:] > Initialize connection to node 0 for sending metadata request
18:05:41.570 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.initiateConnect:] > Initiating connection to node 0 at 7bf9f9278e64:9092.
18:05:43.826 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.initiateConnect:] > Error connecting to node 0 at 7bf9f9278e64:9092:
java.io.IOException: Can't resolve address: 7bf9f9278e64:9092
    at org.apache.kafka.common.network.Selector.connect(Selector.java:180)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:498)
    at org.apache.kafka.clients.NetworkClient.access$400(NetworkClient.java:48)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:645)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:552)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:258)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:236)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135)
    at java.lang.Thread.run(Thread.java:745)

Appreciate any help getting me over this first hurdle.

Try to set --env ADVERTISED_HOST=192.168.1.89 and --env ADVERTISED_PORT=32768 when starting the container. This is required because by default Kafka advertises the local host name (which is the container hostname, e.g. 7bf9f9278e64), and that is not accessible from the host. As you are using port binding, you need to advertise your host IP (e.g. 192.168.1.89) and the mapped port (e.g. 32768).
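For example, a full docker run invocation along those lines might look like this (a sketch: the host IP and port numbers are taken from the question, and the -p flags pin the mappings that Kitematic otherwise assigns randomly):

    docker run -d --name kafka -p 32769:2181 -p 32768:9092 --env ADVERTISED_HOST=192.168.1.89 --env ADVERTISED_PORT=32768 spotify/kafka

As for the docker-compose.yml question: Kitematic doesn't create one for you. You write the file yourself, put it in any directory you like, and run docker-compose up from that directory.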
