Consumer not receiving messages in Apache Kafka
I am building an Apache Kafka consumer to subscribe to another Kafka instance that is already running. My problem is that when my producer pushes messages to the server, my consumer does not receive them.

Here is the producer code:
Properties properties = new Properties();
properties.put("metadata.broker.list", "Running kafka ip addr:9092");
properties.put("serializer.class", "kafka.serializer.StringEncoder");
ProducerConfig producerConfig = new ProducerConfig(properties);
kafka.javaapi.producer.Producer<String, String> producer =
        new kafka.javaapi.producer.Producer<String, String>(producerConfig);

String filePath = "filepath";
File rootFile = new File(filePath);
Collection<File> allFiles = FileUtils.listFiles(rootFile, CanReadFileFilter.CAN_READ, TrueFileFilter.INSTANCE);
for (File file : allFiles) {
    StringBuilder sb = new StringBuilder();
    sb.append(file);
    KeyedMessage<String, String> message = new KeyedMessage<String, String>(TOPIC, sb.toString());
    System.out.println("sending msg from producer.." + sb.toString());
    producer.send(message);
}
producer.close();
Here is the consumer code:
Properties properties = new Properties();
properties.put("bootstrap.servers", "Running zookeaper ip addr:2181");
properties.put("group.id", "test-group");
properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.put("enable.auto.commit", "false");
KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(properties);
consumer.subscribe(Collections.singletonList(topicName));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        System.out.println("topic = " + record.topic());
        System.out.println("partition = " + record.partition());
        System.out.println("offset = " + record.offset());
    }
    try {
        consumer.commitSync();
    } catch (CommitFailedException e) {
        System.out.println("commit failed: " + e);
    }
}
I use these dependencies:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>0.10.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.10.1.0</version>
</dependency>
I got all the information from this link:
https://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html
When we run the consumer, we do not get any output on the consumer side. Please give me any ideas.
For the producer:

properties.put("metadata.broker.list", "Running kafka ip addr:9092");

I guess this should be "bootstrap.servers".
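With the new-API producer in kafka-clients 0.10.x (org.apache.kafka.clients.producer.KafkaProducer), the configuration does indeed use "bootstrap.servers" instead of the old Scala producer's "metadata.broker.list". A minimal sketch of the corrected configuration, assuming a broker reachable at localhost:9092 (placeholder address):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Builds the configuration for the new-API producer.
    // The address must point at a Kafka broker, not at ZooKeeper.
    static Properties buildProducerProps(String brokers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", brokers); // replaces "metadata.broker.list"
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all"); // wait for the broker to acknowledge each send
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildProducerProps("localhost:9092");
        System.out.println(props.getProperty("bootstrap.servers"));
        // With a running broker one would then create the producer and send:
        // try (KafkaProducer<String, String> p = new KafkaProducer<>(props)) {
        //     p.send(new ProducerRecord<>("KumarTopic", "payload"));
        // }
    }
}
```

The actual send is left commented out because it requires a live broker; only the property wiring is shown.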
For the consumer:

properties.put("bootstrap.servers", "Running zookeaper ip addr:2181");

bootstrap.servers must point to a broker, not to ZooKeeper.

The "problem" is that the consumer will just wait for a broker and not fail if there is no broker at the specified host/port.
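A corrected consumer configuration would therefore point bootstrap.servers at the broker's host:port (9092 by default), not at ZooKeeper's 2181. It is also worth setting auto.offset.reset, since a fresh consumer group with no committed offsets defaults to "latest" and will silently skip anything produced before it joined. A sketch, with localhost:9092 as a placeholder broker address:

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    // bootstrap.servers must name one or more Kafka brokers;
    // the new-consumer API never talks to ZooKeeper directly.
    static Properties buildConsumerProps(String brokers, String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", brokers); // e.g. "kafkahost:9092", NOT "zkhost:2181"
        props.put("group.id", groupId);
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // A group with no committed offset otherwise starts at "latest"
        // and sees none of the previously produced messages.
        props.put("auto.offset.reset", "earliest");
        props.put("enable.auto.commit", "false");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildConsumerProps("localhost:9092", "test-group");
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```

The poll loop from the question can then be used unchanged with a KafkaConsumer built from these properties.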
I'm a newbie at Kafka and Java, but I'd like to suggest the following approach:

/usr/bin/kafka-avro-console-consumer --new-consumer --bootstrap-server localhost:9092 --topic KumarTopic --from-beginning