Replay messages from dead letter queue in Spring Cloud Stream with Kafka binder
We are using Spring Cloud Stream with the Confluent Schema Registry, Avro, and the Kafka binder. We have configured all the services in our data processing pipeline to use a shared DLQ Kafka topic, both to simplify exception handling and to be able to replay failed messages.

However, it seems that we are unable to properly extract the message payloads, because messages with different schemas are published to a single DLQ. As a result, we lose track of the original message's schema.
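For context, a shared-DLQ setup like the one described can be sketched in `application.yml` roughly as follows (the binding name `process-in-0`, the topic names, and the group are hypothetical placeholders, not from the original question):

```yaml
spring:
  cloud:
    stream:
      bindings:
        process-in-0:              # hypothetical consumer binding
          destination: orders      # hypothetical source topic
          group: order-service
      kafka:
        bindings:
          process-in-0:
            consumer:
              enableDlq: true      # route failed messages to a DLQ
              dlqName: shared.dlq  # single DLQ topic shared by all services
```

With `dlqName` pointing every service at the same topic, records serialized from different Avro schemas all end up in `shared.dlq`, which is what makes replay ambiguous.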
I was wondering if there is any way to preserve the original `schema_id` of the failed messages in the DLQ so that it can be used for seamless replay.
It turns out this can be achieved by changing the Subject Naming Strategy to `RecordNameStrategy`: regardless of the topic name, a record then keeps its original schema across all topics. More details can be found here.
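As a sketch of what the answer describes, the strategy can be set through the serializer configuration passed to the producer (the property key `value.subject.name.strategy` and the strategy class are from Confluent's serializer configuration; the registry URL is a hypothetical placeholder):

```yaml
spring:
  cloud:
    stream:
      kafka:
        binder:
          producer-properties:
            # hypothetical Schema Registry address
            schema.registry.url: http://localhost:8081
            # register subjects by record name instead of topic name,
            # so a record keeps the same subject (and schema lineage)
            # in the DLQ as in its original topic
            value.subject.name.strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
```

Under the default `TopicNameStrategy`, the subject is derived from the topic (`<topic>-value`), so a shared DLQ would mix many schemas under one subject; `RecordNameStrategy` keys the subject to the Avro record's fully qualified name instead, which is why the `schema_id` remains usable for replay.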