
Kafka consumer gets a message that was consumed before

I have a consumer job in XD that completes once it receives the message produced by another producer job. I trigger these jobs every day. I found that sometimes this consumer receives a message that was already consumed before.

The logs are as follows:

####OK
2019-06-28 02:06:13+0800 INFO inbound.job:Consumer_Job_In_XD-redis:queue-inbound-channel-adapter1 myConsumer.ConsumeTasklet - ==========consumed poll data ConsumerRecord(topic = my_consumer_topic, partition = 0, leaderEpoch = 0, offset = 4, CreateTime = 1561658772877, serialized key size = -1, serialized value size = 30, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = message_from_producer) ==================
2019-06-28 02:06:13+0800 INFO inbound.job:Consumer_Job_In_XD-redis:queue-inbound-channel-adapter1 myConsumer.ConsumeTasklet - ==========message is  message_from_producer, task startTime is 1561658700108, timestamp is 1561658772877 ==================

####NG
2019-06-29 17:07:14+0800 INFO inbound.job:Consumer_Job_In_XD-redis:queue-inbound-channel-adapter1 myConsumer.ConsumeTasklet - ==========consumed poll data ConsumerRecord(topic = my_consumer_topic, partition = 0, leaderEpoch = 0, offset = 0, CreateTime = 1561399136840, serialized key size = -1, serialized value size = 30, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = message_from_producer) ==================
2019-06-29 17:07:14+0800 INFO inbound.job:Consumer_Job_In_XD-redis:queue-inbound-channel-adapter1 myConsumer.ConsumeTasklet - ==========message is  message_from_producer, task startTime is 1561799100282, timestamp is 1561399136840 ==================

####OK
2019-06-29 22:16:58+0800 INFO inbound.job:Consumer_Job_In_XD-redis:queue-inbound-channel-adapter1 myConsumer.ConsumeTasklet - ==========consumed poll data ConsumerRecord(topic = my_consumer_topic, partition = 0, leaderEpoch = 2, offset = 5, CreateTime = 1561817817702, serialized key size = -1, serialized value size = 30, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = message_from_producer) ==================
2019-06-29 22:16:58+0800 INFO inbound.job:Consumer_Job_In_XD-redis:queue-inbound-channel-adapter1 myConsumer.ConsumeTasklet - ==========message is  message_from_producer, task startTime is 1561817528447, timestamp is 1561817817702 ==================

####NG
2019-07-02 02:05:09+0800 INFO inbound.job:Consumer_Job_In_XD-redis:queue-inbound-channel-adapter1 myConsumer.ConsumeTasklet - ==========consumed poll data ConsumerRecord(topic = my_consumer_topic, partition = 0, leaderEpoch = 0, offset = 0, CreateTime = 1561399136840, serialized key size = -1, serialized value size = 30, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = message_from_producer) ==================
2019-07-02 02:05:09+0800 INFO inbound.job:Consumer_Job_In_XD-redis:queue-inbound-channel-adapter1 myConsumer.ConsumeTasklet - ==========message is  message_from_producer, task startTime is 1562004300372, timestamp is 1561399136840 ==================

It looks like it received the offset = 0 message multiple times.

Kafka version: 1.0.0

The consumer commits offsets manually (consumer.commitSync();).
Only the following properties are set:

bootstrap.servers  
auto.offset.reset=earliest  
group.id  
client.id  
    Properties config = new Properties();
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    config.put("auto.offset.reset", "earliest");
    config.put("group.id", group);
    config.put("client.id", config.getProperty("group.id") + "_" + System.currentTimeMillis());
    config.put("enable.auto.commit", false);
    try {
        consumer = new KafkaConsumer<>(config);
        consumer.subscribe(tList);
        while (true) {
            // blocks for up to 10 s waiting for records
            ConsumerRecords<?, ?> records = consumer.poll(10000);
            for (ConsumerRecord<?, ?> record : records) {
                //.........
                // note: the no-argument commitSync() commits the position of the
                // last poll() for ALL assigned partitions, not just this record
                consumer.commitSync();
            }
            if (matched)
                break;
        }
    } finally {
        consumer.close();
    }
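For reference: to commit record-by-record rather than per poll batch, the consumer API takes commitSync(Map<TopicPartition, OffsetAndMetadata>), and the committed offset must be the offset of the *next* record to read, i.e. record.offset() + 1. Here is a minimal sketch of just that offset arithmetic with plain collections (nextOffsets is our own illustrative helper, not part of the Kafka API):

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetBookkeeping {
    // Kafka expects the committed offset to be the offset of the NEXT record
    // to read, i.e. lastProcessedOffset + 1, keyed here by "topic-partition".
    static Map<String, Long> nextOffsets(long[][] records /* {partition, offset} pairs */,
                                         String topic) {
        Map<String, Long> commits = new HashMap<>();
        for (long[] r : records) {
            String tp = topic + "-" + r[0];
            // keep the highest offset seen per partition, plus one
            commits.merge(tp, r[1] + 1, Math::max);
        }
        return commits;
    }

    public static void main(String[] args) {
        long[][] polled = {{0, 4}, {0, 5}, {1, 2}};
        System.out.println(nextOffsets(polled, "my_consumer_topic"));
    }
}
```

In the real consumer loop, the equivalent call would be consumer.commitSync(Collections.singletonMap(new TopicPartition(record.topic(), record.partition()), new OffsetAndMetadata(record.offset() + 1))).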

In Kafka 1.1, offsets are only retained for 24 hours by default, because offsets.retention.minutes is set to 1440.

So if you stop your consumer for more than 24 hours, upon restart the committed offsets may have been deleted, forcing the consumer to use auto.offset.reset to find a new position.

As this was too short for many people, since Kafka 2.0, offsets.retention.minutes is now set to 10080 (7 days).

You should change your broker configuration to retain offsets for longer, or update to a newer Kafka version.
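For example, on Kafka 1.x you could raise the retention in the broker's server.properties (an illustrative fragment; 30240 minutes = 21 days is an arbitrary choice, pick a value longer than the longest gap between your consumer runs):

```properties
# broker config (server.properties): keep committed offsets for 21 days
offsets.retention.minutes=30240
```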

Alternatively, try setting auto.offset.reset=latest; then, if no committed offset is found after a restart, the consumer starts from the end of the log instead of replaying old messages from the beginning.

More info here: https://kafka.apache.org/documentation/#consumerconfigs
