
Docker Kafka Container Consumer Does Not Consume Data

I am new to Docker and also to Apache Kafka. What I am trying to do is create consumer and producer classes in Java. I set up spotify/kafka, which is a Kafka container for Docker. But something went wrong.

I could not find any producer/consumer example for a Docker Kafka container (if you have one, please share), so I just tried to use it like a normal Kafka installation (i.e. not as a Docker container; I guess there is no difference in usage). I tried the code below (I also tried to reach its author to ask, but could not, so I am asking for help here). But when I type something into the producer terminal, nothing appears in the consumer terminal. My OS is Ubuntu Xenial 16.04. Here is what I did:

I started the Docker Kafka container by typing this:

docker run -it spotify/kafka
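For reference, the spotify/kafka README seems to run the image with explicit port mappings and an advertised host, roughly like the line below (ADVERTISED_HOST and ADVERTISED_PORT are that image's environment variables, as far as I can tell; I did not do this at first):

docker run -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=localhost --env ADVERTISED_PORT=9092 spotify/kafka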

And at the end of the output I got this message, so I guess it started correctly:

2018-02-25 09:27:16,911 INFO success: kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

Consumer class:

import java.util.Arrays;
import java.util.Properties;
import java.util.Scanner;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class Consumer {
    private static Scanner in;

    public static void main(String[] argv) throws Exception {
        if (argv.length != 2) {
            System.err.printf("Usage: %s <topicName> <groupId>\n",
                    Consumer.class.getSimpleName());
            System.exit(-1);
        }
        in = new Scanner(System.in);
        String topicName = argv[0];
        String groupId = argv[1];

        ConsumerThread consumerRunnable = new ConsumerThread(topicName, groupId);
        consumerRunnable.start();

        // Block until the user types "exit", then shut the consumer down cleanly
        String line = "";
        while (!line.equals("exit")) {
            line = in.next();
        }
        consumerRunnable.getKafkaConsumer().wakeup();
        System.out.println("Stopping consumer .....");
        consumerRunnable.join();
    }

    private static class ConsumerThread extends Thread {
        private String topicName;
        private String groupId;
        private KafkaConsumer<String, String> kafkaConsumer;

        public ConsumerThread(String topicName, String groupId) {
            this.topicName = topicName;
            this.groupId = groupId;
        }

        public void run() {
            Properties configProperties = new Properties();
            configProperties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Key deserializer changed to StringDeserializer so it matches KafkaConsumer<String, String>
            configProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
            configProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
            configProperties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
            configProperties.put(ConsumerConfig.CLIENT_ID_CONFIG, "simple");

            // Figure out where to start processing messages from
            kafkaConsumer = new KafkaConsumer<String, String>(configProperties);
            kafkaConsumer.subscribe(Arrays.asList(topicName));
            // Start processing messages
            try {
                while (true) {
                    ConsumerRecords<String, String> records = kafkaConsumer.poll(100);
                    System.out.println(records.count() + " records arrived"); // debug output
                    for (ConsumerRecord<String, String> record : records)
                        System.out.println(record.value());
                }
            } catch (WakeupException ex) {
                System.out.println("Exception caught " + ex.getMessage());
            } finally {
                kafkaConsumer.close();
                System.out.println("After closing KafkaConsumer");
            }
        }

        public KafkaConsumer<String, String> getKafkaConsumer() {
            return this.kafkaConsumer;
        }
    }
}

Producer Class:

import java.util.Properties;
import java.util.Scanner;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class Producer {
    private static Scanner in;

    public static void main(String[] argv) throws Exception {
        if (argv.length != 1) {
            System.err.println("Please specify 1 parameter (the topic name)");
            System.exit(-1);
        }
        String topicName = argv[0];
        in = new Scanner(System.in);
        System.out.println("Enter message (type exit to quit)");

        // Configure the producer
        Properties configProperties = new Properties();
        configProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Key serializer changed to StringSerializer so it matches ProducerRecord<String, String>
        configProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        configProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        org.apache.kafka.clients.producer.Producer<String, String> producer =
                new KafkaProducer<String, String>(configProperties);
        String line = in.nextLine();
        while (!line.equals("exit")) {
            // Use the ProducerRecord constructor that does not take a partition id,
            // so the default partitioner chooses the partition
            ProducerRecord<String, String> rec = new ProducerRecord<String, String>(topicName, line);
            producer.send(rec);
            line = in.nextLine();
        }
        in.close();
        producer.close();
    }
}
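One thing that makes this hard to debug: producer.send() is asynchronous, so as written above any broker connection failure is swallowed silently. A minimal sketch using the standard Callback overload of send() (nothing image-specific), which replaces the plain producer.send(rec) inside the loop and prints errors:

producer.send(rec, new Callback() {
    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            // e.g. a TimeoutException when the broker is unreachable
            System.err.println("Send failed: " + exception.getMessage());
        } else {
            System.out.println("Sent to " + metadata.topic() + "-" + metadata.partition()
                    + " @ offset " + metadata.offset());
        }
    }
});

(Callback and RecordMetadata are in org.apache.kafka.clients.producer.)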

Then I built the fat jar and ran both classes in different terminals by typing:

mvn clean compile assembly:single
java -cp (fat jar path) .../Consumer test(topic name) group1
java -cp (fat jar path) .../Producer test(topic name)

When I type something in the producer terminal, nothing appears in the consumer. Note that I did not install ZooKeeper separately, because spotify/kafka includes ZooKeeper. I also did not create any topic or group before these steps; I could not find how to do that. These are the only things I did. How can I solve this?
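About the topic: as far as I know the broker in these images has auto.create.topics.enable=true by default, so the topic should be created automatically on first use, but it can also be created explicitly with the kafka-topics.sh script that ships inside the container. A sketch, assuming the container is named kafka and guessing the Kafka install path (it depends on the image):

docker exec -it kafka /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test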

Edit: I added the consumer and producer config values below; can anybody spot a mistake?

Consumer Config:

metric.reporters = []
metadata.max.age.ms = 300000
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
group.id = gr1
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [localhost:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
enable.auto.commit = true
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
session.timeout.ms = 30000
metrics.num.samples = 2
client.id = simple
ssl.endpoint.identification.algorithm = null
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
ssl.protocol = TLS
check.crcs = true
request.timeout.ms = 40000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
fetch.min.bytes = 1024
send.buffer.bytes = 131072
auto.offset.reset = latest

2018-02-25 16:23:37 INFO  AppInfoParser:82 - Kafka version : 0.9.0.0
2018-02-25 16:23:37 INFO  AppInfoParser:83 - Kafka commitId :     fc7243c2af4b2b4a
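One thing in this dump that can hide messages independently of any connection problem: auto.offset.reset = latest, so a consumer group that joins after the messages were produced will not see the older ones. To read from the beginning with a new group id, the standard consumer setting would be:

configProperties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");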

Producer Config:

compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id = 
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 30000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = 1
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 0

2018-02-25 16:24:16 INFO  AppInfoParser:82 - Kafka version : 0.9.0.0
2018-02-25 16:24:16 INFO  AppInfoParser:83 - Kafka commitId : fc7243c2af4b2b4a

After a long search, I found the problem. When I ran ches/kafka (the Docker Kafka image I ended up switching to), I had not specified a host port mapping.

docker run -d -p 2181:2181 --name zookeeper jplock/zookeeper
docker run -d -p 9092 --name kafka --link zookeeper:zookeeper ches/kafka

This is how I run the ZooKeeper and Kafka containers now. Even after specifying the port it did not work at first, because a container is actually an isolated process: it behaves as if it owned all the hardware, but it does not. Publishing the port as -p 9092 does not give the container port 9092 on the host.

In the background, Docker maps the container's 9092 to a suitable free host port, which we can see with docker ps.

[Screenshot: docker ps output]

In the output above you can see 0.0.0.0:32769->9092/tcp, which means the container's port 9092 is actually published on host port 32769. So after changing the port number to 32769 in the code, it worked fine. Hope it helps somebody.
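If you do not want a randomly assigned host port, you can publish a fixed mapping instead and tell Kafka which address to advertise to clients. A sketch for ches/kafka (KAFKA_ADVERTISED_HOST_NAME is the environment variable that image documents, if I remember correctly):

docker run -d -p 9092:9092 --name kafka --link zookeeper:zookeeper --env KAFKA_ADVERTISED_HOST_NAME=localhost ches/kafka

With that, the clients can keep using localhost:9092 instead of the random port.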
