
Implement Spring Service to send message to different Kafka topics based on configuration

I would like to use a Spring service to send data to different Kafka topics based on configuration:

ResponseFactory processingReply = null;

switch (endpointType) {
    case "email":
        ProducerRecord<String, Object> record = new ProducerRecord<>("tp-email.request", tf);
        RequestReplyFuture<String, Object, Object> replyFuture = processingTransactionEmailReplyKafkaTemplate.sendAndReceive(record);
        SendResult<String, Object> sendResult = replyFuture.getSendFuture().get(10, TimeUnit.SECONDS);
        ConsumerRecord<String, Object> consumerRecord = replyFuture.get(10, TimeUnit.SECONDS);

        processingReply = (ResponseFactory) consumerRecord.value();
        break;
    case "sms":
        ProducerRecord<String, Object> record = new ProducerRecord<>("tp-sms.request", tf);
        RequestReplyFuture<String, Object, Object> replyFuture = processingTransactionSmsReplyKafkaTemplate.sendAndReceive(record);
        SendResult<String, Object> sendResult = replyFuture.getSendFuture().get(10, TimeUnit.SECONDS);
        ConsumerRecord<String, Object> consumerRecord = replyFuture.get(10, TimeUnit.SECONDS);

        processingReply = (ResponseFactory) consumerRecord.value();
        break;
    case "network":
        ProducerRecord<String, Object> record = new ProducerRecord<>("tp-network.request", tf);
        RequestReplyFuture<String, Object, Object> replyFuture = processingTransactionNetworkReplyKafkaTemplate.sendAndReceive(record);
        SendResult<String, Object> sendResult = replyFuture.getSendFuture().get(10, TimeUnit.SECONDS);
        ConsumerRecord<String, Object> consumerRecord = replyFuture.get(10, TimeUnit.SECONDS);

        processingReply = (ResponseFactory) consumerRecord.value();
        break;

    default:
        processingReply = ResponseFactory.builder().status("error").build();
}

I currently get:

  • Variable 'record' is already defined in the scope
  • Variable 'sendResult' is already defined in the scope
  • Variable 'consumerRecord' is already defined in the scope

Do you know how I can redesign the code in a better way to solve this issue? I would like to apply the DRY principle with a Spring service in order to reduce the code.

You could autowire all the ReplyingKafkaTemplate beans and look up the one matching your endpoint type.

@Autowired
private List<ReplyingKafkaTemplate<String, Object, Object>> templates;

ReplyingKafkaTemplate<String, Object, Object> template = null;
for (ReplyingKafkaTemplate<String, Object, Object> replyingKafkaTemplate : templates) {
    String defaultTopic = replyingKafkaTemplate.getDefaultTopic();
    if (defaultTopic != null && defaultTopic.contains(endpointType)) {
        template = replyingKafkaTemplate;
        break;
    }
}
// template is still null here if no default topic matched; handle that case as needed
ProducerRecord<String, Object> record = new ProducerRecord<>(template.getDefaultTopic(), tf);
RequestReplyFuture<String, Object, Object> replyFuture = template.sendAndReceive(record);
SendResult<String, Object> sendResult = replyFuture.getSendFuture().get(10, TimeUnit.SECONDS);
ConsumerRecord<String, Object> consumerRecord = replyFuture.get(10, TimeUnit.SECONDS);
ResponseFactory processingReply = (ResponseFactory) consumerRecord.value();
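The topic-matching loop in isolation, with plain topic names standing in for the templates (a stdlib-only sketch, not the real Kafka API):

```java
import java.util.List;

public class TopicLookupDemo {
    // Mirrors the loop above: pick the first default topic that contains the endpoint type.
    static String pickTopic(List<String> defaultTopics, String endpointType) {
        for (String topic : defaultTopics) {
            if (topic != null && topic.contains(endpointType)) {
                return topic;
            }
        }
        return null; // no match: caller must handle this, just as 'template' may stay null above
    }

    public static void main(String[] args) {
        List<String> topics = List.of("tp-email.request", "tp-sms.request", "tp-network.request");
        System.out.println(pickTopic(topics, "sms")); // prints tp-sms.request
    }
}
```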

You could also set up your configuration so as to create a bean of lookup types, then inject a Map<String, ReplyingKafkaTemplate> for easy lookup. Since I don't know your setup, I can't provide the configuration for you.

@Autowired
private Map<String, ReplyingKafkaTemplate<String, Object, Object>> templates;

ReplyingKafkaTemplate<String, Object, Object> template = templates.get(endpointType);
ProducerRecord<String, Object> record = new ProducerRecord<>(template.getDefaultTopic(), tf);
RequestReplyFuture<String, Object, Object> replyFuture = template.sendAndReceive(record);
SendResult<String, Object> sendResult = replyFuture.getSendFuture().get(10, TimeUnit.SECONDS);
ConsumerRecord<String, Object> consumerRecord = replyFuture.get(10, TimeUnit.SECONDS);
ResponseFactory processingReply = (ResponseFactory) consumerRecord.value();
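For illustration only, a configuration that exposes such a lookup map could be sketched as below; the three template parameters are assumed to already exist as beans (their names here match the fields in the question, but your setup may differ):

```java
// Hypothetical configuration: map keys deliberately match the endpointType strings.
@Configuration
public class ReplyTemplateLookupConfig {

    @Bean
    public Map<String, ReplyingKafkaTemplate<String, Object, Object>> templates(
            ReplyingKafkaTemplate<String, Object, Object> processingTransactionEmailReplyKafkaTemplate,
            ReplyingKafkaTemplate<String, Object, Object> processingTransactionSmsReplyKafkaTemplate,
            ReplyingKafkaTemplate<String, Object, Object> processingTransactionNetworkReplyKafkaTemplate) {
        Map<String, ReplyingKafkaTemplate<String, Object, Object>> map = new HashMap<>();
        map.put("email", processingTransactionEmailReplyKafkaTemplate);
        map.put("sms", processingTransactionSmsReplyKafkaTemplate);
        map.put("network", processingTransactionNetworkReplyKafkaTemplate);
        return map;
    }
}
```

Note that Spring can also populate a `Map<String, T>` injection point with all beans of type T keyed by bean name, so an explicit bean like this is only needed when the keys must match your endpointType values rather than the bean names.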

Applying KISS, if not quite DRY: wrap each case's code block in braces.

case "email": {
    // ... same body as before, now in its own block scope
}
break;
// ... and so on for the other cases

By doing this you shrink each case's scope, so you can reuse the same variable names.
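A minimal, self-contained illustration of the scoping fix, with plain strings standing in for the Kafka calls:

```java
public class CaseScopeDemo {
    // Each case body is wrapped in braces, so 'result' can be redeclared per case
    // without the "already defined in the scope" compile error.
    static String dispatch(String endpointType) {
        String reply;
        switch (endpointType) {
            case "email": {
                String result = "tp-email.request"; // local to this block
                reply = result;
            }
            break;
            case "sms": {
                String result = "tp-sms.request";   // same name, different scope: compiles fine
                reply = result;
            }
            break;
            default:
                reply = "error";
        }
        return reply;
    }

    public static void main(String[] args) {
        System.out.println(dispatch("sms")); // prints tp-sms.request
    }
}
```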

I think you can use interfaces to separate the logic of sending data to different endpoints. Take a look at the code below:

The main class that sends data and receives the response. It doesn't know anything about email, SMS, or network senders.

package com.example.demo.service;

import com.example.demo.dto.Response;
import org.springframework.stereotype.Service;

import java.util.List;

@Service
public class KafkaSender {

    private final List<EndpointSender> senders;

    public KafkaSender(List<EndpointSender> senders) {
        this.senders = senders;
    }

    public Response send(Object data, String endpoint) {
        return senders
            .stream()
            .filter(it -> it.supports(endpoint))
            .findAny()
            .map(it -> it.send(data))
            .orElseGet(() -> new Response("error"));
    }
}

Then we create an interface like this:

package com.example.demo.service;

import com.example.demo.dto.Response;

public interface EndpointSender {
    Response send(Object obj);
    boolean supports(String endpoint);
}
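To see the stream-based dispatch from KafkaSender in isolation, here is a self-contained sketch with stub senders and a hypothetical Response record (no Spring or Kafka involved):

```java
import java.util.List;

public class DispatchDemo {
    record Response(String status) {}

    interface EndpointSender {
        Response send(Object obj);
        boolean supports(String endpoint);
    }

    // Stub standing in for the real EmailSender/SmsSender/NetworkSender implementations.
    record StubSender(String endpoint) implements EndpointSender {
        public Response send(Object obj) { return new Response("sent-via-" + endpoint); }
        public boolean supports(String e) { return endpoint.equals(e); }
    }

    // Same logic as KafkaSender.send: first supporting sender wins, else an error response.
    static Response send(List<EndpointSender> senders, Object data, String endpoint) {
        return senders.stream()
                .filter(it -> it.supports(endpoint))
                .findAny()
                .map(it -> it.send(data))
                .orElseGet(() -> new Response("error"));
    }

    public static void main(String[] args) {
        List<EndpointSender> senders = List.of(new StubSender("email"), new StubSender("sms"));
        System.out.println(send(senders, "payload", "sms").status()); // prints sent-via-sms
    }
}
```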

And the implementations:

A base class to reduce boilerplate code:

package com.example.demo.service.sender;

import com.example.demo.dto.Response;
import com.example.demo.service.EndpointSender;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
import org.springframework.kafka.requestreply.RequestReplyFuture;
import org.springframework.kafka.support.SendResult;

import java.util.concurrent.TimeUnit;

public abstract class BaseSender implements EndpointSender {

    public abstract ProducerRecord<String, Object> getRecord(Object obj);

    public abstract ReplyingKafkaTemplate<String, Object, Object> kafkaTemplate();

    @Override
    public Response send(Object obj) {
        try {
            RequestReplyFuture<String, Object, Object> replyFuture = kafkaTemplate().sendAndReceive(getRecord(obj));
            SendResult<String, Object> sendResult = replyFuture.getSendFuture().get(10, TimeUnit.SECONDS);
            ConsumerRecord<String, Object> consumerRecord = replyFuture.get(10, TimeUnit.SECONDS);

            return (Response) consumerRecord.value();
        } catch (Throwable t) {
            throw new RuntimeException(t);
        }
    }
}

And the implementations for the senders. The email sender:

package com.example.demo.service.sender;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class EmailSender extends BaseSender {

    private final ReplyingKafkaTemplate<String, Object, Object> processingTransactionEmailReplyKafkaTemplate;

    public EmailSender(ReplyingKafkaTemplate<String, Object, Object> processingTransactionEmailReplyKafkaTemplate) {
        this.processingTransactionEmailReplyKafkaTemplate = processingTransactionEmailReplyKafkaTemplate;
    }

    @Override
    public boolean supports(String endpoint) {
        return "email".equals(endpoint);
    }

    @Override
    public ProducerRecord<String, Object> getRecord(Object obj) {
        return new ProducerRecord<>("tp-email.request", obj);
    }

    @Override
    public ReplyingKafkaTemplate<String, Object, Object> kafkaTemplate() {
        return processingTransactionEmailReplyKafkaTemplate;
    }
}

The SMS sender:

package com.example.demo.service.sender;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class SmsSender extends BaseSender {

    private final ReplyingKafkaTemplate<String, Object, Object> processingTransactionSmsReplyKafkaTemplate;

    public SmsSender(ReplyingKafkaTemplate<String, Object, Object> processingTransactionSmsReplyKafkaTemplate) {
        this.processingTransactionSmsReplyKafkaTemplate = processingTransactionSmsReplyKafkaTemplate;
    }

    @Override
    public boolean supports(String endpoint) {
        return "sms".equals(endpoint);
    }

    @Override
    public ProducerRecord<String, Object> getRecord(Object obj) {
        return new ProducerRecord<>("tp-sms.request", obj);
    }

    @Override
    public ReplyingKafkaTemplate<String, Object, Object> kafkaTemplate() {
        return processingTransactionSmsReplyKafkaTemplate;
    }
}

The network sender:

package com.example.demo.service.sender;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class NetworkSender extends BaseSender {

    private final ReplyingKafkaTemplate<String, Object, Object> processingTransactionNetworkReplyKafkaTemplate;

    public NetworkSender(ReplyingKafkaTemplate<String, Object, Object> processingTransactionNetworkReplyKafkaTemplate) {
        this.processingTransactionNetworkReplyKafkaTemplate = processingTransactionNetworkReplyKafkaTemplate;
    }

    @Override
    public boolean supports(String endpoint) {
        return "network".equals(endpoint);
    }

    @Override
    public ProducerRecord<String, Object> getRecord(Object obj) {
        return new ProducerRecord<>("tp-network.request", obj);
    }

    @Override
    public ReplyingKafkaTemplate<String, Object, Object> kafkaTemplate() {
        return processingTransactionNetworkReplyKafkaTemplate;
    }
}
