
Lost message from the Kafka Topic

While experimenting with the timestamp field in ProducerRecord, I found something weird. After sending a few messages from the producer, I ran kafka-console-consumer.sh and verified that those messages were in the topic. I then stopped the producer and waited for a minute. When I reran kafka-console-consumer.sh, it no longer showed the messages I had produced earlier. I also added producer.flush() and producer.close(), but the outcome was still the same.

Now, when I stopped using the timestamp field, everything worked fine, which makes me believe there is something finicky about messages with an explicit timestamp.

I am using Kafka_2.11-2.0.0 (released on July 30, 2018).

Following is the sample code:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.internals.RecordHeaders;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

import static java.lang.Thread.sleep;

public class KafkaProducerSample {
    public static void main(String[] args) throws InterruptedException {
        String kafkaHost = "sample:port";
        String notificationTopic = "test";

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaHost);
        props.put(ProducerConfig.ACKS_CONFIG, "1"); // acks must be a String
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);

        Producer<String, String> producer =
                new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());

        RecordHeaders recordHeaders = new RecordHeaders();
        // Partition left null; timestamp set explicitly
        ProducerRecord<String, String> record =
                new ProducerRecord<>(notificationTopic, null, 1574443515L, "sampleKey", "sampleValue");
        producer.send(record);
        sleep(1000);
    }
}

I run the console consumer as follows:

$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server KAFKA_HOST:PORT --topic test --from-beginning

#output after running producer
test


#output 5mins after shutting down producer

You are asynchronously sending only one record, but never waiting for the ack or flushing the buffer.

You will need to send more records,

or

producer.send(record).get();

or

producer.send(record);
producer.flush();

or (preferred), register a shutdown hook with Runtime.getRuntime().addShutdownHook() in your main method to flush and close the producer.
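The preferred option could be sketched as follows. This is a minimal, stand-alone illustration of the shutdown-hook pattern only: FakeProducer is a hypothetical stand-in for the real KafkaProducer (so the snippet runs without a broker); in the actual program the hook body would call flush() and close() on the real producer instead.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ShutdownHookSketch {

    // Stand-in for KafkaProducer so this sketch runs without a broker;
    // flush() and close() mirror the names of the real client methods.
    static class FakeProducer {
        final AtomicBoolean flushed = new AtomicBoolean(false);
        final AtomicBoolean closed = new AtomicBoolean(false);
        void send(String record) { /* buffered asynchronously in the real client */ }
        void flush() { flushed.set(true); }  // the real flush() blocks until the buffer drains
        void close() { closed.set(true); }
    }

    public static void main(String[] args) {
        FakeProducer producer = new FakeProducer();

        // Register the hook before sending, so any records still buffered
        // at exit are flushed and the producer is closed cleanly.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            producer.flush();
            producer.close();
        }));

        producer.send("hello");
        // main returns here; the JVM runs the hook before the process exits.
    }
}
```

This way the buffer is drained even if the process is terminated (e.g. by SIGTERM) rather than exiting normally.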
