
Spring Cloud Stream Kafka timestamp

I am using Spring Cloud Stream to consume from Kafka. I need to get the time the event was published to the topic by the publisher (the producer's publish time, not the broker ingestion time).

I can see the following information in the headers:

kafka_timestampType=CREATE_TIME,
kafka_receivedTopic=Topic_Name,
kafka_receivedTimestamp=1563108979621,
timestamp=1563108984514

I am really confused by these two timestamps. Some sources say that a CREATE_TIME timestamp type means it is the publishing timestamp from the producer. But which header represents the publishing time: is it kafka_receivedTimestamp or just timestamp?
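Both header values are plain epoch milliseconds, so you can inspect them with `java.time` to see what they represent. A small sketch using the exact numbers from the headers above shows they are roughly five seconds apart:

```java
import java.time.Duration;
import java.time.Instant;

public class TimestampCheck {
    public static void main(String[] args) {
        // Values copied from the message headers in the question
        long receivedTimestamp = 1563108979621L; // kafka_receivedTimestamp
        long timestamp = 1563108984514L;         // timestamp (Spring-managed)

        Instant received = Instant.ofEpochMilli(receivedTimestamp);
        Instant springTs = Instant.ofEpochMilli(timestamp);

        System.out.println("kafka_receivedTimestamp = " + received); // 2019-07-14T12:56:19.621Z
        System.out.println("timestamp               = " + springTs);
        System.out.println("difference = "
                + Duration.between(received, springTs).toMillis() + " ms"); // 4893 ms
    }
}
```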

When using Spring Cloud Stream, I noticed that the Spring message headers include a timestamp that is under Spring's control. Does that mean kafka_receivedTimestamp is the publishing time of the record?

The documentation just says, "The header for holding the timestamp of the consumer record." It doesn't clarify whether that is the consuming time, the publishing time, or the broker ingestion time.

https://docs.spring.io/spring-kafka/api/org/springframework/kafka/support/KafkaHeaders.html#RECEIVED_TIMESTAMP

Could anyone explain what these two timestamps mean, based on the timestamp type?

kafka_timestamp is for when you want to set a custom timestamp on an outbound record.

kafka_receivedTimestamp is populated from the incoming ConsumerRecord (it was set when the record was published).

A different header is used during header mapping to prevent inadvertent header propagation when sending a message that originated as an incoming message:

receive -> process -> send

If we used the same header, an application could set the same timestamp on the outbound message, which most likely would not be correct. If that is actually what you want to do, then copy the received timestamp into the timestamp header.
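The copy itself is just moving a value from one header key to another. Here is a minimal sketch of that logic using a plain Map in place of Spring's MessageHeaders (the string constants mirror the values of spring-kafka's KafkaHeaders.RECEIVED_TIMESTAMP and KafkaHeaders.TIMESTAMP; the helper name is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class TimestampPropagation {
    // Header names matching Spring Kafka's KafkaHeaders constant values
    static final String RECEIVED_TIMESTAMP = "kafka_receivedTimestamp";
    static final String TIMESTAMP = "kafka_timestamp";

    // Copy the inbound record's timestamp onto the outbound record so the
    // broker stores the original publish time rather than "now".
    static Map<String, Object> propagateTimestamp(Map<String, Object> inboundHeaders) {
        Map<String, Object> outboundHeaders = new HashMap<>();
        Object received = inboundHeaders.get(RECEIVED_TIMESTAMP);
        if (received != null) {
            outboundHeaders.put(TIMESTAMP, received);
        }
        return outboundHeaders;
    }

    public static void main(String[] args) {
        Map<String, Object> inbound = new HashMap<>();
        inbound.put(RECEIVED_TIMESTAMP, 1563108979621L);

        System.out.println(propagateTimestamp(inbound)); // {kafka_timestamp=1563108979621}
    }
}
```

In an actual Spring Cloud Stream processor you would do the equivalent with MessageBuilder, e.g. `setHeader(KafkaHeaders.TIMESTAMP, headers.get(KafkaHeaders.RECEIVED_TIMESTAMP))`; the binder then uses kafka_timestamp as the outbound ProducerRecord timestamp.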


