How to implement kafka connection pool same as jdbc connection pool

I have a Kafka producer class that initializes a new connection every time it produces data, which is a time-consuming process. To make it faster I want to implement Kafka connection pooling. I have searched a lot for a solution but did not find the right one. Please point me to the right solution. Thanks. My Kafka producer class is:

import java.util.Properties;

import org.apache.log4j.Logger;

import com.bisil.report.queue.QueueDBFeederService;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

@SuppressWarnings("deprecation")
public class KafkaProducer1 implements ProducerService {

    private static Producer<Integer, String> producer;
    private static final String topic = "mytopic1";
    private Logger logger = Logger.getLogger(KafkaProducer1.class);

    @Override
    public void initialize() {
        try {
            Properties producerProps = new Properties();
            producerProps.put("metadata.broker.list", "192.168.21.182:9092");
            producerProps.put("serializer.class", "kafka.serializer.StringEncoder");
            producerProps.put("request.required.acks", "1");
            ProducerConfig producerConfig = new ProducerConfig(producerProps);
            producer = new Producer<Integer, String>(producerConfig);
        } catch (Exception e) {
            logger.error("Exception while sending data to server " + e, e);
        }
        logger.info("Test Message");
    }

    @Override
    public void publishMessage(String jsonPacket) {
        // Publishes the message on the given topic
        KeyedMessage<Integer, String> keyedMsg = new KeyedMessage<Integer, String>(topic, jsonPacket);
        producer.send(keyedMsg);
    }

    @Override
    public void callMessage(String jsonPacket) {
        // A new producer is created and closed for every single message, which is the slow part
        initialize();
        // Publish message
        publishMessage(jsonPacket);
        // Close the producer
        producer.close();
    }

}

You can put all the messages in an array, publish them to the topic iteratively, and then close the producer when done. That way initialization happens only once and close/destroy is called only once. You can do something like this:

String[] jsonPacket; // your message array
for (int i = 0; i < jsonPacket.length; i++) {
    producer.send(new KeyedMessage<Integer, String>(topic, jsonPacket[i]));
}
producer.close();
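
For reference, here is a minimal sketch of how the producer class from the question could be restructured so that the producer is created once, reused for every message, and closed only at shutdown. The class and method names below (ReusableKafkaProducer, shutdown) are illustrative and not from the question; the broker and serializer settings are copied from it.

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ReusableKafkaProducer {

    private static final String TOPIC = "mytopic1";
    // Built once when the class is loaded; reused by every publishMessage call
    private static final Producer<Integer, String> PRODUCER = createProducer();

    private static Producer<Integer, String> createProducer() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "192.168.21.182:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");
        return new Producer<Integer, String>(new ProducerConfig(props));
    }

    public static void publishMessage(String jsonPacket) {
        // Reuses the single long-lived connection instead of opening a new one
        PRODUCER.send(new KeyedMessage<Integer, String>(TOPIC, jsonPacket));
    }

    // Close the single producer once, when the application shuts down
    public static void shutdown() {
        PRODUCER.close();
    }
}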

If my understanding is correct, you need a pool of producer objects that is always available when a new publish request arrives and waits for further requests once a task completes. Your requirement matches the 'object pool' pattern (an object factory combined with an executor framework in Java), which is implemented by Apache Commons, so you can get the KafkaProducer object from the pool. The object pool concept is implemented and available in the Apache Commons jar. https://dzone.com/articles/creating-object-pool-java
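
If you go the Apache Commons route, a minimal sketch using commons-pool2 with the same old producer API as in the question might look like the following. The pool size, topic, and sample message are assumptions for illustration only.

import java.util.Properties;

import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ProducerPoolExample {

    // Tells Commons Pool how to create, wrap, and destroy producer instances
    static class KafkaProducerFactory extends BasePooledObjectFactory<Producer<Integer, String>> {
        private final Properties props;

        KafkaProducerFactory(Properties props) {
            this.props = props;
        }

        @Override
        public Producer<Integer, String> create() {
            // Called by the pool when it needs a new producer (new connection)
            return new Producer<Integer, String>(new ProducerConfig(props));
        }

        @Override
        public PooledObject<Producer<Integer, String>> wrap(Producer<Integer, String> producer) {
            return new DefaultPooledObject<Producer<Integer, String>>(producer);
        }

        @Override
        public void destroyObject(PooledObject<Producer<Integer, String>> p) {
            // Called when the pool evicts or closes an instance
            p.getObject().close();
        }
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("metadata.broker.list", "192.168.21.182:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");

        GenericObjectPool<Producer<Integer, String>> pool =
                new GenericObjectPool<Producer<Integer, String>>(new KafkaProducerFactory(props));
        pool.setMaxTotal(10); // assumed maximum number of pooled producers

        Producer<Integer, String> producer = pool.borrowObject();
        try {
            producer.send(new KeyedMessage<Integer, String>("mytopic1", "{\"msg\":\"hello\"}"));
        } finally {
            // Hand the producer back to the pool instead of closing it
            pool.returnObject(producer);
        }

        pool.close(); // closes all pooled producers on application shutdown
    }
}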
