Spring-Kafka: sending a custom record instead of the failed record to a DLT using DeadLetterPublishingRecoverer

I am using DeadLetterPublishingRecoverer to automatically send failed records to a DLT. I would like to send a custom record to the DLT instead of the failed record. Is it possible to do this? Please help me with the configuration. My DeadLetterPublishingRecoverer is configured as follows.

@Bean
DeadLetterPublishingRecoverer deadLetterPublishingRecoverer(KafkaTemplate<String, byte[]> byteArrayTemplate) {
    return new DeadLetterPublishingRecoverer([(byte[].class): byteArrayTemplate])
}

Create a subclass of DeadLetterPublishingRecoverer and override the createProducerRecord() method.

/**
 * Subclasses can override this method to customize the producer record to send to the
 * DLQ. The default implementation simply copies the key and value from the consumer
 * record and adds the headers. The timestamp is not set (the original timestamp is in
 * one of the headers). IMPORTANT: if the partition in the {@link TopicPartition} is
 * less than 0, it must be set to null in the {@link ProducerRecord}.
 * @param record the failed record
 * @param topicPartition the {@link TopicPartition} returned by the destination
 * resolver.
 * @param headers the headers - original record headers plus DLT headers.
 * @param data the value to use instead of the consumer record value.
 * @param isKey true if key deserialization failed.
 * @return the producer record to send.
 * @see KafkaHeaders
 */
protected ProducerRecord<Object, Object> createProducerRecord(ConsumerRecord<?, ?> record,
        TopicPartition topicPartition, Headers headers, @Nullable byte[] data, boolean isKey) {
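For the current (pre-2.7) signature, an override might look like the following sketch. The class name, the replacement payload, and the choice to keep the original key are illustrative assumptions, not part of the original answer; the null-partition handling follows the javadoc note above.

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.header.Headers;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.lang.Nullable;

// Hypothetical subclass that replaces the failed record's value with a custom payload.
public class CustomDeadLetterRecoverer extends DeadLetterPublishingRecoverer {

    public CustomDeadLetterRecoverer(KafkaOperations<?, ?> template) {
        super(template);
    }

    @Override
    protected ProducerRecord<Object, Object> createProducerRecord(ConsumerRecord<?, ?> record,
            TopicPartition topicPartition, Headers headers,
            @Nullable byte[] data, boolean isKey) {
        // Assumed custom payload; substitute whatever record you want on the DLT.
        byte[] customValue = "custom DLT payload".getBytes(StandardCharsets.UTF_8);
        // Per the javadoc: a negative partition must be passed as null so the
        // producer chooses the partition.
        Integer partition = topicPartition.partition() < 0 ? null : topicPartition.partition();
        return new ProducerRecord<>(topicPartition.topic(), partition,
                record.key(), customValue, headers);
    }
}
```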

In the upcoming 2.7 release, this changes to

/**
 * Subclasses can override this method to customize the producer record to send to the
 * DLQ. The default implementation simply copies the key and value from the consumer
 * record and adds the headers. The timestamp is not set (the original timestamp is in
 * one of the headers). IMPORTANT: if the partition in the {@link TopicPartition} is
 * less than 0, it must be set to null in the {@link ProducerRecord}.
 * @param record the failed record
 * @param topicPartition the {@link TopicPartition} returned by the destination
 * resolver.
 * @param headers the headers - original record headers plus DLT headers.
 * @param key the key to use instead of the consumer record key.
 * @param value the value to use instead of the consumer record value.
 * @return the producer record to send.
 * @see KafkaHeaders
 */
protected ProducerRecord<Object, Object> createProducerRecord(ConsumerRecord<?, ?> record,
        TopicPartition topicPartition, Headers headers, @Nullable byte[] key, @Nullable byte[] value) {
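Against the 2.7 signature, the same idea becomes the sketch below; here both key and value can be replaced. Again, the class name and payloads are assumptions for illustration only.

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.header.Headers;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.lang.Nullable;

// Hypothetical 2.7-style subclass replacing the outgoing DLT value.
public class CustomDeadLetterRecoverer extends DeadLetterPublishingRecoverer {

    public CustomDeadLetterRecoverer(KafkaOperations<?, ?> template) {
        super(template);
    }

    @Override
    protected ProducerRecord<Object, Object> createProducerRecord(ConsumerRecord<?, ?> record,
            TopicPartition topicPartition, Headers headers,
            @Nullable byte[] key, @Nullable byte[] value) {
        // Assumed custom payload instead of the failed record's value.
        byte[] customValue = "custom DLT payload".getBytes(StandardCharsets.UTF_8);
        // A negative partition must become null in the ProducerRecord (see javadoc).
        Integer partition = topicPartition.partition() < 0 ? null : topicPartition.partition();
        return new ProducerRecord<>(topicPartition.topic(), partition,
                key != null ? key : record.key(), customValue, headers);
    }
}
```

The subclass instance can then be passed wherever the original bean returned `new DeadLetterPublishingRecoverer(...)`.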

