Apache Kafka with Strimzi operator on OpenShift - cannot connect

I've been following this tutorial step by step to set up Kafka on OpenShift using the Strimzi operator:

https://developers.redhat.com/blog/2018/10/29/how-to-run-kafka-on-openshift-the-enterprise-kubernetes-with-amq-streams/

but instead of the sample application, I prepared my own very simple Kafka producer. Here is the code:

import java.util.Date;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/kafka")
public class KafkaController {

    @GetMapping
    public void ok() {
        final Properties props = new Properties();
        // External bootstrap route exposed by Strimzi, terminating TLS on port 443
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap-kafka-test.ocapp-pg.domain.com:443");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // TLS settings; the same JKS file is used as both keystore and truststore
        props.put("security.protocol", "SSL");
        props.put("ssl.keystore.location", "src/main/resources/keystore.jks");
        props.put("ssl.keystore.password", "password");
        props.put("ssl.truststore.location", "src/main/resources/keystore.jks");
        props.put("ssl.truststore.password", "password");

        // Blocks the request thread, sending one message every two seconds
        try (final Producer<String, String> producer = new KafkaProducer<>(props)) {
            while (true) {
                final String date = new Date().toString();
                System.out.println("Sending message: " + date);
                producer.send(new ProducerRecord<>("tag-topic", "date", date));
                Thread.sleep(2000);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

When trying to send messages to Kafka, this is what I get in the logs:

2019-05-16 19:55:13.960 DEBUG 21476 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Initiating connection to node my-cluster-kafka-2-kafka-test.ocapp-pg.domain.com:443 (id: 2 rack: )
2019-05-16 19:55:14.037 DEBUG 21476 --- [ad | producer-1] o.apache.kafka.common.network.Selector   : [Producer clientId=producer-1] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2
2019-05-16 19:55:14.038 DEBUG 21476 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Completed connection to node 2. Fetching API versions.
2019-05-16 19:55:14.111 DEBUG 21476 --- [ad | producer-1] o.apache.kafka.common.network.Selector   : [Producer clientId=producer-1] Connection with my-cluster-kafka-2-kafka-test.ocapp-pg.domain.com/52.215.40.40 disconnected

java.io.EOFException: EOF during handshake, handshake status is NEED_UNWRAP
    at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:489) ~[kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:337) ~[kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:264) ~[kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:125) ~[kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:489) [kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:427) [kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510) [kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239) [kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163) [kafka-clients-2.0.1.jar:na]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_201]

2019-05-16 19:55:14.112 DEBUG 21476 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Node 2 disconnected.
2019-05-16 19:55:14.112  WARN 21476 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Connection to node 2 terminated during authentication. This may indicate that authentication failed due to invalid credentials.
2019-05-16 19:55:14.112 DEBUG 21476 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Give up sending metadata request since no node is available
2019-05-16 19:55:14.162 DEBUG 21476 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Give up sending metadata request since no node is available

It seems like something with the truststore, maybe? But I downloaded the cacert and imported it into the truststore just like in the blog post. I even tried copying in the cert manually. Still the same. What am I doing wrong here?
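To rule out a bad import, here is a minimal sketch that just lists what is inside the JKS file, so you can confirm the Strimzi cluster CA entry actually landed there. The path and password are the ones assumed from the producer config above:

import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

public class TruststoreCheck {
    public static void main(String[] args) throws Exception {
        // Load the JKS file the producer points at (path/password assumed from the config above)
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("src/main/resources/keystore.jks")) {
            ks.load(in, "password".toCharArray());
        }
        // Print every entry; the imported cluster CA should show up as a certificate entry
        for (String alias : Collections.list(ks.aliases())) {
            System.out.println(alias + " -> certificate entry: " + ks.isCertificateEntry(alias));
        }
    }
}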

I encountered the same error when my service was wrongly configured and didn't select any pods. Check whether your service is listing any pods.
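As a quick way to tell a trust problem apart from a route with no backing pods, you can test a bare TLS handshake against the bootstrap address outside of Kafka. This is only a sketch; the hostname, truststore path, and password are the assumed values from the question. If the service selects no pods, the handshake may fail with the same kind of EOF seen in the logs above:

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.TrustManagerFactory;

public class HandshakeCheck {
    public static void main(String[] args) throws Exception {
        // Build an SSLContext from the same truststore the producer uses
        KeyStore ts = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("src/main/resources/keystore.jks")) {
            ts.load(in, "password".toCharArray());
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ts);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);

        // Attempt a bare TLS handshake against the bootstrap route
        try (SSLSocket socket = (SSLSocket) ctx.getSocketFactory()
                .createSocket("my-cluster-kafka-bootstrap-kafka-test.ocapp-pg.domain.com", 443)) {
            socket.startHandshake();
            System.out.println("Handshake OK: " + socket.getSession().getCipherSuite());
        }
    }
}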
