
How to write a Kafka consumer client in Java to consume messages from multiple brokers?

I was looking for a Java client (Kafka consumer) to consume messages from multiple brokers. Please advise.

Below is the code used to publish messages to multiple brokers via a simple custom partitioner.

The topic is created with a replication factor of 2 and 3 partitions.
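For reference, a topic with this layout can also be created programmatically with the `AdminClient` API. This is a sketch, not part of the question's code: the broker address `localhost:9092` and topic name `my-topic` are illustrative.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed entry-point broker; any one reachable broker of the cluster works.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // 3 partitions, replication factor 2, matching the setup described above.
        NewTopic topic = new NewTopic("my-topic", 3, (short) 2);

        try (AdminClient admin = AdminClient.create(props)) {
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```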

public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster)
{
    List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
    int numPartitions = partitions.size();
    logger.info("Number of Partitions " + numPartitions);
    if (keyBytes == null) 
    {
        int nextValue = counter.getAndIncrement();
        List<PartitionInfo> availablePartitions = cluster.availablePartitionsForTopic(topic);
        if (availablePartitions.size() > 0) 
        {
            int part = toPositive(nextValue) % availablePartitions.size();
            int selectedPartition = availablePartitions.get(part).partition();
            logger.info("Selected partition is " + selectedPartition);
            return selectedPartition;
        } 
        else 
        {
            // no partitions are available, give a non-available partition
            return toPositive(nextValue) % numPartitions;
        }
    } 
    else 
    {
        // hash the keyBytes to choose a partition
        return toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

}
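As an aside, the `toPositive` helper used above (from Kafka's `Utils` class) just masks off the sign bit, which, unlike `Math.abs`, is safe for `Integer.MIN_VALUE`. A minimal stand-alone sketch of the keyless round-robin branch, using only the standard library:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinSketch {
    private static final AtomicInteger counter = new AtomicInteger(0);

    // Same trick as Kafka's Utils.toPositive: clear the sign bit.
    // Math.abs(Integer.MIN_VALUE) is still negative, so plain abs() would be unsafe.
    static int toPositive(int number) {
        return number & 0x7fffffff;
    }

    // Round-robin over numPartitions, as in the keyBytes == null branch above.
    static int nextPartition(int numPartitions) {
        return toPositive(counter.getAndIncrement()) % numPartitions;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 6; i++) {
            System.out.println(nextPartition(3)); // cycles 0, 1, 2, 0, 1, 2
        }
    }
}
```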


public void publishMessage(String message, String topic)
{
    Producer<String, String> producer = null;
    try
    {
        producer = new KafkaProducer<>(producerConfigs());
        logger.info("Topic to publish the message -- " + topic);
        for (int i = 0; i < 10; i++)
        {
            // Note: send() is asynchronous; it returns a Future rather than
            // guaranteeing delivery at this point.
            producer.send(new ProducerRecord<String, String>(topic, message));
            logger.info("Message sent");
        }
    }
    catch (Exception e)
    {
        logger.error("Exception occurred: " + e.getMessage(), e);
    }
    finally
    {
        if (producer != null) // guard against a failed constructor
        {
            producer.close();
        }
    }
}

public Map<String, Object> producerConfigs() 
{
    loadPropertyFile();
    Map<String, Object> propsMap = new HashMap<>();
    propsMap.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
    propsMap.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    propsMap.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    propsMap.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, SimplePartitioner.class);
    propsMap.put(ProducerConfig.ACKS_CONFIG, "1");
    return propsMap;
}

public Map<String, Object> consumerConfigs() {
    Map<String, Object> propsMap = new HashMap<>();
    logger.info("properties.getBootstrap() " + properties.getBootstrap());
    propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, properties.getBootstrap());
    propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, properties.getAutocommit());
    propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, properties.getTimeout());
    propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, properties.getGroupid());
    propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, properties.getAutooffset());
    return propsMap;
}

@KafkaListener(id = "ID1", topics = "${config.topic}", group = "${config.groupid}")
public void listen(ConsumerRecord<?, ?> record) 
{
    logger.info("Message Consumed " + record);
    logger.info("Partition From which Record is Received " + record.partition());
    this.message = record.value().toString();   
}

bootstrap.servers = [localhost:9092, localhost:9093, localhost:9094]

If you use a regular Java consumer, it will automatically read from multiple brokers; there is no special code you need to write. Just subscribe to the topic(s) you want to consume, and the consumer will connect to the corresponding brokers automatically. You only provide a "single entry point" broker -- the client discovers all the other brokers of the cluster automatically.
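To illustrate, here is a minimal plain `KafkaConsumer` along those lines. It is a sketch under assumptions: the broker list matches the `bootstrap.servers` shown above, while the group id `demo-group` and topic `my-topic` are placeholders.

```java
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {
    public static Properties consumerProps(String bootstrap) {
        Properties props = new Properties();
        // Any one reachable broker is enough; the client discovers the rest.
        props.put("bootstrap.servers", bootstrap);
        props.put("group.id", "demo-group"); // placeholder group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(
                consumerProps("localhost:9092,localhost:9093,localhost:9094"))) {
            consumer.subscribe(Arrays.asList("my-topic")); // placeholder topic
            while (true) {
                // One consumer receives records from all partitions assigned
                // to it, regardless of which broker leads each partition.
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```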

The number of Kafka broker nodes in the cluster has nothing to do with consumer logic; the nodes are only relevant for fault tolerance and the bootstrap process. Placing messages in different partitions of a topic based on custom logic also does not affect the consumer. Even if you have a single consumer, it will consume messages from all partitions of the subscribed topic. I suggest you also test your code against a Kafka cluster with a single broker node.
