
How to write a Kafka Consumer Client in Java to consume messages from multiple brokers?

I was looking for a Java client (Kafka Consumer) to consume messages from multiple brokers. Please advise.

Below is the code written to publish messages to multiple brokers using a simple partitioner.

The topic is created with a replication factor of 2 and 3 partitions.

public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster)
{
    List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
    int numPartitions = partitions.size();
    logger.info("Number of Partitions " + numPartitions);
    if (keyBytes == null) 
    {
        int nextValue = counter.getAndIncrement();
        List<PartitionInfo> availablePartitions = cluster.availablePartitionsForTopic(topic);
        if (availablePartitions.size() > 0) 
        {
            int part = toPositive(nextValue) % availablePartitions.size();
            int selectedPartition = availablePartitions.get(part).partition();
            logger.info("Selected partition is " + selectedPartition);
            return selectedPartition;
        } 
        else 
        {
            // no partitions are available, give a non-available partition
            return toPositive(nextValue) % numPartitions;
        }
    } 
    else 
    {
        // hash the keyBytes to choose a partition
        return toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

}
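The `toPositive` helper used above is not shown in the snippet; in the Apache Kafka client sources it is a small bit-masking utility (`Utils.toPositive`), roughly as sketched below. The class name here is illustrative, not part of the original code.

```java
public class PartitionerUtils {
    // Converts a (possibly negative) hash to a non-negative int by clearing
    // the sign bit. Unlike Math.abs, this is also safe for Integer.MIN_VALUE,
    // whose absolute value does not fit in an int.
    public static int toPositive(int number) {
        return number & 0x7fffffff;
    }

    public static void main(String[] args) {
        System.out.println(toPositive(-1));                // 2147483647
        System.out.println(toPositive(Integer.MIN_VALUE)); // 0
        System.out.println(toPositive(42));                // 42
    }
}
```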


public void publishMessage(String message, String topic)
{
    Producer<String, String> producer = null;
    try
    {
        producer = new KafkaProducer<>(producerConfigs());
        logger.info("Topic to publish the message -- " + this.topic);
        for (int i = 0; i < 10; i++)
        {
            producer.send(new ProducerRecord<String, String>(this.topic, message));
            logger.info("Message Published Successfully");
        }
    }
    catch (Exception e)
    {
        logger.error("Exception Occurred " + e.getMessage());
    }
    finally
    {
        // Guard against an NPE if the producer failed to construct
        if (producer != null)
        {
            producer.close();
        }
    }
}

public Map<String, Object> producerConfigs() 
{
    loadPropertyFile();
    Map<String, Object> propsMap = new HashMap<>();
    propsMap.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
    propsMap.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    propsMap.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    propsMap.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, SimplePartitioner.class);
    propsMap.put(ProducerConfig.ACKS_CONFIG, "1");
    return propsMap;
}

public Map<String, Object> consumerConfigs() {
    Map<String, Object> propsMap = new HashMap<>();
    System.out.println("properties.getBootstrap() " + properties.getBootstrap());
    propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, properties.getBootstrap());
    propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, properties.getAutocommit());
    propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, properties.getTimeout());
    propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, properties.getGroupid());
    propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, properties.getAutooffset());
    return propsMap;
}

@KafkaListener(id = "ID1", topics = "${config.topic}", group = "${config.groupid}")
public void listen(ConsumerRecord<?, ?> record) 
{
    logger.info("Message Consumed " + record);
    logger.info("Partition From which Record is Received " + record.partition());
    this.message = record.value().toString();   
}

bootstrap.servers = [localhost:9092, localhost:9093, localhost:9094]

If you use a regular Java consumer, it will automatically read from multiple brokers. There is no special code you need to write. Just subscribe to the topic(s) you want to consume, and the consumer will connect to the corresponding brokers automatically. You only provide a "single entry point" broker -- the client figures out all the other brokers of the cluster automatically.
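A minimal sketch of such a plain `KafkaConsumer` is shown below. The topic name `my-topic` and group id `demo-group` are placeholders; any one reachable address in `bootstrap.servers` is enough for the client to discover the rest of the cluster. (This needs a running Kafka cluster, so it is a sketch rather than a standalone runnable test.)

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MultiBrokerConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // One reachable broker would suffice; the client fetches cluster
        // metadata at startup and connects to the other brokers itself.
        props.put("bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094");
        props.put("group.id", "demo-group"); // placeholder consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Records arrive from every partition this consumer is assigned,
                    // regardless of which broker leads each partition.
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```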

The number of Kafka broker nodes in the cluster has nothing to do with consumer logic. The nodes in the cluster are only used for fault tolerance and the bootstrap process. Placing messages in different partitions of a topic based on some custom logic also does not affect consumer logic. Even if you have a single consumer, that consumer will consume messages from all partitions of the subscribed topic. I suggest you first check your code against a Kafka cluster with a single broker node...

