
Send message to different Kafka topics based on configuration

I would like to send data to different Kafka topics based on configuration:

ResponseFactory processingPeply = null;

        switch(endpointType)
        {
            case "email":
                ProducerRecord<String, Object> record = new ProducerRecord<>("tp-email.request", tf);
                RequestReplyFuture<String, Object, Object> replyFuture = processingTransactionEmailReplyKafkaTemplate.sendAndReceive(record);
                SendResult<String, Object> sendResult = replyFuture.getSendFuture().get(10, TimeUnit.SECONDS);
                ConsumerRecord<String, Object> consumerRecord = replyFuture.get(10, TimeUnit.SECONDS);

                processingPeply = (ResponseFactory) consumerRecord.value();
              break;
            case "sms":
                ProducerRecord<String, Object> record = new ProducerRecord<>("tp-sms.request", tf);
                RequestReplyFuture<String, Object, Object> replyFuture = processingTransactionSmsReplyKafkaTemplate.sendAndReceive(record);
                SendResult<String, Object> sendResult = replyFuture.getSendFuture().get(10, TimeUnit.SECONDS);
                ConsumerRecord<String, Object> consumerRecord = replyFuture.get(10, TimeUnit.SECONDS);

                processingPeply = (ResponseFactory) consumerRecord.value();
              break;
            case "network":
                ProducerRecord<String, Object> record = new ProducerRecord<>("tp-network.request", tf);
                RequestReplyFuture<String, Object, Object> replyFuture = processingTransactionNetworkReplyKafkaTemplate.sendAndReceive(record);
                SendResult<String, Object> sendResult = replyFuture.getSendFuture().get(10, TimeUnit.SECONDS);
                ConsumerRecord<String, Object> consumerRecord = replyFuture.get(10, TimeUnit.SECONDS);

                processingPeply = (ResponseFactory) consumerRecord.value();
              break;
              
            default:
                processingPeply = ResponseFactory.builder().status("error").build();
        } 

I currently get:

  • Variable 'record' is already defined in the scope
  • Variable 'sendResult' is already defined in the scope
  • Variable 'consumerRecord' is already defined in the scope

Do you know how I can redesign this code in a better way to solve the issue?
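As an aside, the compiler errors happen because all `case` labels of a `switch` share a single scope, so the same variable name cannot be declared twice across cases. Wrapping each case body in braces gives each declaration its own scope. A minimal, Kafka-free illustration (the class and topic strings here are just for demonstration):

```java
public class SwitchScopeDemo {
    static String pick(String endpointType) {
        String result;
        switch (endpointType) {
            case "email": {            // braces open a new block scope
                String topic = "tp-email.request";
                result = topic;
                break;
            }
            case "sms": {              // so 'topic' can be re-declared here
                String topic = "tp-sms.request";
                result = topic;
                break;
            }
            default:
                result = "error";
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(pick("sms")); // tp-sms.request
    }
}
```

This fixes the compile error directly, but the duplicated case bodies remain, which is what the approaches below address.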

Suggesting here 4 possible approaches, in order to avoid the switch blocks in the core code and honor one of DRY's principles: avoiding duplicated code. (DRY represents a much bigger concept than just not repeating code.)


1- GeneralHandler and endpoint-type children

Something like a hierarchical class tree here, with the different endpoints being extensions of an abstract/general parent.

                      [GeneralKafkaHandler] - core/common logic
               _______________ | ________________
              |                |                |
              v                v                v
         {SmsHandler}    {EmailHandler}   {NetworkHandler}  -- specific params/methods

For example, getTopic() and getFuture() could be abstract in the parent and implemented by each child with its own logic. Another option would be making getKafkaTemplate() another abstract method (choosing between getFuture() or getKafkaTemplate()). The code below is a simplification of the hierarchy, and the topic is passed in through the constructor.

Abstract parent

abstract class GeneralKafkaHandler 
{
   public abstract RequestReplyFuture<String, Object, Object> 
                   getFuture(ProducerRecord<String, Object> r);
   public abstract String getName();

   protected String topic;
   protected int id;
   ResponseFactory processingPeply = null;

   public GeneralKafkaHandler(String topic, int id) 
   {
       this.topic = topic; 
       this.id = id;
   }

   //the main/common logic is implemented here
   public void handle(Object tf) throws InterruptedException, ExecutionException, TimeoutException
   {
       ProducerRecord<String, Object> record = new ProducerRecord<>(topic, tf);
       RequestReplyFuture<String, Object, Object> rf = getFuture(record);  
       SendResult<String, Object> sr = rf.getSendFuture().get(10, TimeUnit.SECONDS);
       ConsumerRecord<String, Object> consumerRecord = rf.get(10, TimeUnit.SECONDS);
       processingPeply = (ResponseFactory) consumerRecord.value();
   }

   //...
}

SmsKafkaHandler

class SmsKafkaHandler extends GeneralKafkaHandler 
{
   //Sms specific variables, methods,..
    
   public SmsKafkaHandler(String topic, int id) 
   {
      super(topic, id);
      //sms code
   }

   @Override
   public String getName() 
   {
      return "SMSHandler_" + topic + "_" + id;
   }

   @Override
   public RequestReplyFuture<String, Object, Object> getFuture(ProducerRecord<String, Object> r)
   {
      //sms code
      return processingTransactionSmsReplyKafkaTemplate.sendAndReceive(r);
   }

   //...
}

Main (just an example)

Map<String, GeneralKafkaHandler> handlerMap = new HashMap<>();
handlerMap.put("sms", new SmsKafkaHandler("tp-sms.request",1));
handlerMap.put("smsplus", new SmsKafkaHandler("tp-sms-plus.request",2));
handlerMap.put("email", new EmailKafkaHandler("tp-email.request",1));
//...

handlerMap.get(endpointType.toLowerCase()).handle(tf);

There are different options here; for example, sendAndReceive() is a common method for all template types, so getFuture() could be replaced by just a getTemplate() method. There are many options to play with here.

This approach would be a good idea if you need/wish to manage each endpoint in more depth; consider it if you think the separate management is worth it, or will be in the future. As the core mechanism is the same, different extensions would let you quickly implement new endpoint types.
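To show how little is needed to add a new endpoint type under this design, here is a stripped-down, Kafka-free sketch of the same pattern (the `Handler` classes and string payloads are illustrative stand-ins, not the original Kafka code):

```java
import java.util.HashMap;
import java.util.Map;

abstract class Handler {
    protected final String topic;
    Handler(String topic) { this.topic = topic; }

    // each child supplies only its endpoint-specific piece
    abstract String send(String message);

    // the common logic lives once, in the parent
    String handle(String payload) { return send(topic + ":" + payload); }
}

class SmsHandler extends Handler {
    SmsHandler(String topic) { super(topic); }
    @Override String send(String msg) { return "sms->" + msg; }
}

// adding a new endpoint type is just one small subclass
class EmailHandler extends Handler {
    EmailHandler(String topic) { super(topic); }
    @Override String send(String msg) { return "email->" + msg; }
}

public class Registry {
    public static void main(String[] args) {
        Map<String, Handler> handlers = new HashMap<>();
        handlers.put("sms", new SmsHandler("tp-sms.request"));
        handlers.put("email", new EmailHandler("tp-email.request"));
        System.out.println(handlers.get("sms").handle("hi")); // sms->tp-sms.request:hi
    }
}
```

In the real code, `send` would build the `ProducerRecord` and call the endpoint's own `ReplyingKafkaTemplate`.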


2- Custom entity

In essence, there are just 2 elements that differ per endpoint type:

  1. Topic
  2. ReplyingKafkaTemplate

You could wrap them into a single object. For example:

public class TopicEntity
{
  public final String topic;
  public final ReplyingKafkaTemplate<String,Object,Object> template;

  public TopicEntity(String topic, ReplyingKafkaTemplate<String,Object,Object> template)
  {
     this.topic = topic;
     this.template = template;
  }    
}

So then you can get this without modifying your current code (here I assume your templates are already initialized):

TopicEntity smsE = new TopicEntity("tp-sms.request",
                                   processingTransactionSmsReplyKafkaTemplate);
TopicEntity mailE = new TopicEntity("tp-email.request",
                                   processingTransactionEmailReplyKafkaTemplate);

Map<String, TopicEntity> handlerMap = new HashMap<>();
handlerMap.put("sms", smsE);
handlerMap.put("email",mailE);
//...

TopicEntity te = handlerMap.get(endpointType.toLowerCase()); 
//Based on endpoint
ProducerRecord<String, Object> record = new ProducerRecord<>(te.topic, tf);
RequestReplyFuture<String, Object, Object> rf = te.template.sendAndReceive(record);
//Common regardless of endpoint
SendResult<String, Object> sr = rf.getSendFuture().get(10, TimeUnit.SECONDS);
ConsumerRecord<String, Object> consumerRecord = rf.get(10,TimeUnit.SECONDS);
processingPeply = (ResponseFactory) consumerRecord.value();

Pretty simple, and it also avoids duplicated code; the entity would also allow you to define specific characteristics for each endpoint.
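For instance, the entity could carry per-endpoint settings such as a reply timeout. A minimal sketch, with the Kafka template omitted and a hypothetical `timeoutSeconds` field added (both the class name and the field are illustrative, not from the original code):

```java
import java.util.HashMap;
import java.util.Map;

public class EndpointConfig {
    public final String topic;
    public final long timeoutSeconds; // hypothetical per-endpoint setting

    public EndpointConfig(String topic, long timeoutSeconds) {
        this.topic = topic;
        this.timeoutSeconds = timeoutSeconds;
    }

    public static void main(String[] args) {
        Map<String, EndpointConfig> map = new HashMap<>();
        map.put("sms", new EndpointConfig("tp-sms.request", 10));
        map.put("email", new EndpointConfig("tp-email.request", 30));

        EndpointConfig cfg = map.get("email");
        System.out.println(cfg.topic + " / " + cfg.timeoutSeconds + "s");
    }
}
```

The lookup code would then use `cfg.timeoutSeconds` in place of the hard-coded `10` passed to `get(...)`.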


3- Getter methods

The simplest way, just to make the main code look cleaner.

ProducerRecord<String, Object> record = new ProducerRecord<>(getTopic(endpointType),tf);
RequestReplyFuture<String, Object, Object> replyFuture = getFuture(endpointType,record);
/*rest of the code here (common regardless type)*/

And the getters:

String getTopic(String e)
{
   switch(e.toLowerCase())
   {
      case "email"  : return "tp-email.request"; 
      case "sms"    : return "tp-sms.request";
      case "network": return "tp-network.request";
      default : /*handle error*/ return null; 
                /*Kafka's response: "topic cannot be null"*/
    }
}

RequestReplyFuture<String, Object, Object> getFuture(String e, ProducerRecord<String, Object> r)
{
  switch(e.toLowerCase())
  {
     case "email": 
          return processingTransactionEmailReplyKafkaTemplate.sendAndReceive(r);
     case "sms" :
           return processingTransactionSmsReplyKafkaTemplate.sendAndReceive(r);
     case "network": 
           return processingTransactionNetworkReplyKafkaTemplate.sendAndReceive(r);
     default : /*handle error*/ return null;
  }            /*this one should never be executed*/
}

4- Single setter

Well, maybe this one is the simplest way... it would be a close call between approaches 3 and 4.

ReplyingKafkaTemplate<String, Object, Object> template;
String topic;
//...

void setParameters(String e)
{
  switch(e.toLowerCase())
  {
    case "email"  : 
          topic = "tp-email.request"; 
          template = processingTransactionEmailReplyKafkaTemplate;
          break;         
    case "sms"    :       
          topic = "tp-sms.request"; 
          template = processingTransactionSmsReplyKafkaTemplate;
          break;         
     //...
   }
}
//...

setParameters(endpointType);

ProducerRecord<String, Object> r = new ProducerRecord<>(topic, tf);
RequestReplyFuture<String, Object, Object> replyFuture = template.sendAndReceive(r);
SendResult<String, Object> sr = replyFuture.getSendFuture().get(10, TimeUnit.SECONDS);
ConsumerRecord<String, Object> consumerRecord = replyFuture.get(10, TimeUnit.SECONDS);
processingPeply = (ResponseFactory) consumerRecord.value();

1.a)- Spring and GeneralHandler

Spoiler: I don't know sh# about Spring, so this may be totally incorrect.

From what I've read here, the abstract class doesn't need any annotation; just the fields that may be accessed by the children would need @Autowired.

abstract class GeneralKafkaHandler 
{
   public abstract RequestReplyFuture<String, Object, Object> 
                   getFuture(ProducerRecord<String, Object> r);
   public abstract String getName();

   @Autowired
   protected String topic;
   @Autowired
   protected int id;

   ResponseFactory processingPeply = null;

   public GeneralKafkaHandler(String topic, int id) 
   {
       this.topic = topic; 
       this.id = id;
   }

   //the main/common logic is implemented here
   public void handle(Object tf) throws InterruptedException, ExecutionException, TimeoutException
   {
       ProducerRecord<String, Object> record = new ProducerRecord<>(topic, tf);
       RequestReplyFuture<String, Object, Object> rf = getFuture(record);  
       SendResult<String, Object> sr = rf.getSendFuture().get(10, TimeUnit.SECONDS);
       ConsumerRecord<String, Object> consumerRecord = rf.get(10, TimeUnit.SECONDS);
       processingPeply = (ResponseFactory) consumerRecord.value();
   }

   //...
}

And the children should have the @Component annotation, as well as @Autowired on the constructor; I'm not really sure about the last one, as the examples I've seen also include fields that are defined in the child.

@Component
class SmsKafkaHandler extends GeneralKafkaHandler 
{
   //Sms specific variables, methods,..
    
   @Autowired  //not sure about this..
   public SmsKafkaHandler(String topic, int id) 
   {
      super(topic, id);
      //sms code
   }

   @Override
   public String getName() 
   {
      return "SMSHandler_" + topic + "_" + id;
   }

   @Override
   public RequestReplyFuture<String, Object, Object> getFuture(ProducerRecord<String, Object> r)
   {
      //sms code
      return processingTransactionSmsReplyKafkaTemplate.sendAndReceive(r);
   }

   //...
}

Really, I don't know what I'm talking about regarding this Spring solution; I don't even know what those annotations are. The meme of the dog looking at a computer represents me at this moment, so take this carefully...


DRY is for losers
