How to get kafka consume lag in java program

I wrote a Java program to consume messages from Kafka, and I want to monitor the consume lag. How can I get it in Java?

By the way, I am using:

<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>0.10.1.1</version>

Thanks in advance.

If you don't want to include the kafka (and Scala) dependencies in your project, you can use the class below. It uses only the kafka-clients dependency.

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BinaryOperator;
import java.util.stream.Collectors;

public class KafkaConsumerMonitor {

    public static class PartionOffsets {
        private long endOffset;
        private long currentOffset;
        private int partion;
        private String topic;

        public PartionOffsets(long endOffset, long currentOffset, int partion, String topic) {
            this.endOffset = endOffset;
            this.currentOffset = currentOffset;
            this.partion = partion;
            this.topic = topic;
        }

        public long getEndOffset() {
            return endOffset;
        }

        public long getCurrentOffset() {
            return currentOffset;
        }

        public int getPartion() {
            return partion;
        }

        public String getTopic() {
            return topic;
        }
    }

    private final String monitoringConsumerGroupID = "monitoring_consumer_" + UUID.randomUUID().toString();

    public Map<TopicPartition, PartionOffsets> getConsumerGroupOffsets(String host, String topic, String groupId) {
        Map<TopicPartition, Long> logEndOffset = getLogEndOffset(topic, host);


        // Keys of logEndOffset are unique, so the merge function should never be called.
        BinaryOperator<PartionOffsets> mergeFunction = (a, b) -> {
            throw new IllegalStateException();
        };

        try (KafkaConsumer<?, ?> consumer = createNewConsumer(groupId, host)) {
            return logEndOffset.entrySet()
                    .stream()
                    .collect(Collectors.toMap(
                            entry -> entry.getKey(),
                            entry -> {
                                // committed() returns null when the group has no committed offset for the partition
                                OffsetAndMetadata committed = consumer.committed(entry.getKey());
                                long currentOffset = committed == null ? 0L : committed.offset();
                                return new PartionOffsets(entry.getValue(), currentOffset, entry.getKey().partition(), topic);
                            }, mergeFunction));
        }
    }

    public Map<TopicPartition, Long> getLogEndOffset(String topic, String host) {
        Map<TopicPartition, Long> endOffsets = new ConcurrentHashMap<>();
        KafkaConsumer<?, ?> consumer = createNewConsumer(monitoringConsumerGroupID, host);
        List<PartitionInfo> partitionInfoList = consumer.partitionsFor(topic);
        List<TopicPartition> topicPartitions = partitionInfoList.stream().map(pi -> new TopicPartition(topic, pi.partition())).collect(Collectors.toList());
        consumer.assign(topicPartitions);
        consumer.seekToEnd(topicPartitions);
        topicPartitions.forEach(topicPartition -> endOffsets.put(topicPartition, consumer.position(topicPartition)));
        consumer.close();
        return endOffsets;
    }

    private static KafkaConsumer<?, ?> createNewConsumer(String groupId, String host) {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, host);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new KafkaConsumer<>(properties);
    }
}
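
A minimal usage sketch for the class above (the broker address, topic, and group id below are placeholder values):

KafkaConsumerMonitor monitor = new KafkaConsumerMonitor();
// Hypothetical broker/topic/group - adjust to your environment
Map<TopicPartition, KafkaConsumerMonitor.PartionOffsets> offsets =
        monitor.getConsumerGroupOffsets("localhost:9092", "my-topic", "my-group");
// lag per partition = log end offset - committed offset
offsets.forEach((tp, po) ->
        System.out.println(tp + " lag=" + (po.getEndOffset() - po.getCurrentOffset())));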

Personally, I query the JMX information directly from my consumers. I only consume in Java, so the JMX bean kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*/records-lag-max is available.
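
If you are in the same JVM as the consumer, a minimal sketch for reading that bean from the platform MBean server could look like this (it assumes the consumer's default JmxReporter is enabled, which it is unless the metric reporters were overridden):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ConsumerLagJmx {
    public static void printRecordsLagMax() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Match every consumer client registered in this JVM
        ObjectName pattern = new ObjectName(
                "kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*");
        for (ObjectName name : server.queryNames(pattern, null)) {
            Object lagMax = server.getAttribute(name, "records-lag-max");
            System.out.println(name.getKeyProperty("client-id")
                    + " records-lag-max=" + lagMax);
        }
    }
}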

If jolokia is in your classpath, you can retrieve the value with a GET on /jolokia/read/kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*/records-lag-max and collect all the results in one place.
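
A rough sketch of that GET from Java (the host and port are hypothetical; they depend on how the Jolokia agent is exposed on the consumer host):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class JolokiaLagReader {
    public static String readLagJson() throws Exception {
        // Hypothetical Jolokia endpoint on the consumer host
        URL url = new URL("http://consumer-host:8778/jolokia/read/"
                + "kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*/records-lag-max");
        StringBuilder json = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                json.append(line);
            }
        }
        // JSON payload containing records-lag-max per client-id
        return json.toString();
    }
}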

There is also Burrow, which is easy to configure, but it is a bit outdated (it does not work for 0.10, if I remember well).

I am using Spring for my API. Using the code below, you can get the metrics via Java. The code works.

import java.util.Map;

import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Component;

@Component
public class Receiver {

    private static final Logger LOGGER = LoggerFactory.getLogger(Receiver.class);

    @Autowired
    private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    public void testlag() {
        for (MessageListenerContainer messageListenerContainer : kafkaListenerEndpointRegistry
                .getListenerContainers()) {
            Map<String, Map<MetricName, ? extends Metric>> metrics = messageListenerContainer.metrics();
            metrics.forEach((clientid, metricMap) -> {
                System.out.println("------------------------For client id : " + clientid);
                metricMap.forEach((metricName, metricValue) -> {
                    // filter on metricName.name().contains("lag") to keep only the lag metrics
                    System.out.println("------------Metric name: " + metricName.name()
                            + "-----------Metric value: " + metricValue.metricValue());
                });
            });
        }
    }
}

You can set a SetStatisticsHandler callback function when creating the consumer. For example, the C# code looks like this:

var config = new ConsumerConfig()
{
    BootstrapServers = entrypoints,
    GroupId = groupid,
    EnableAutoCommit = false,
    StatisticsIntervalMs = 1000 // statistics interval time
};

var consumer = new ConsumerBuilder<Ignore, byte[]>(config)
    .SetStatisticsHandler((c, json) => {
        logger.LogInformation(json); // statistics metrics, including consumer lag
    })
    .Build();

For details, see the statistics metrics in STATISTICS.md.

Try to use AdminClient#listGroupOffsets(groupID) to retrieve the offsets of all topic partitions associated with the consumer group. For example:

AdminClient client = AdminClient.createSimplePlaintext("localhost:9092");
Map<TopicPartition, Object> offsets = JavaConversions.asJavaMap(
    client.listGroupOffsets("groupID"));
Long offset = (Long) offsets.get(new TopicPartition("topic", 0));
...

EDIT
The snippet above shows how to get the committed offset for a given partition. The code below shows how to retrieve the LEO (log end offset) for a partition.

public long getLogEndOffset(TopicPartition tp) {
    // try-with-resources so the throwaway consumer gets closed
    try (KafkaConsumer<String, String> consumer = createNewConsumer()) {
        consumer.assign(Collections.singletonList(tp));
        consumer.seekToEnd(Collections.singletonList(tp));
        return consumer.position(tp);
    }
}

private KafkaConsumer<String, String> createNewConsumer() {
    Properties properties = new Properties();
    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, "g1");
    properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
    properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
    properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
    return new KafkaConsumer<>(properties);
}

Calling getLogEndOffset returns the LEO for the given partition; subtract the committed offset from it, and the result is the lag.
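
Putting the two snippets together, a small helper along these lines (reusing getLogEndOffset above, with the committed offset obtained from the listGroupOffsets call shown earlier) yields the lag:

public long getLag(TopicPartition tp, long committedOffset) {
    // lag = log end offset (LEO) - committed offset of the group
    return getLogEndOffset(tp) - committedOffset;
}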

For your reference, I got this done with the code below. Basically, you have to calculate the lag of each topic partition manually by computing the delta between the current committed offset and the end offset.

private static Map<TopicPartition, Long> lagOf(String brokers, String groupId) {
    Properties props = new Properties();
    props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, brokers);
    try (AdminClient client = AdminClient.create(props)) {
        ListConsumerGroupOffsetsResult currentOffsets = client.listConsumerGroupOffsets(groupId);

        try {
            // get current offsets of consuming topic-partitions
            Map<TopicPartition, OffsetAndMetadata> consumedOffsets = currentOffsets.partitionsToOffsetAndMetadata()
                    .get(3, TimeUnit.SECONDS);
            final Map<TopicPartition, Long> result = new HashMap<>();
            doWithKafkaConsumer(groupId, brokers, (c) -> {
                // get latest offsets of consuming topic-partitions
                // lag = latest_offset - current_offset
                Map<TopicPartition, Long> endOffsets = c.endOffsets(consumedOffsets.keySet());
                result.putAll(endOffsets.entrySet().stream().collect(Collectors.toMap(entry -> entry.getKey(),
                        entry -> entry.getValue() - consumedOffsets.get(entry.getKey()).offset())));
            });
            return result;
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            log.error("", e);
            return Collections.emptyMap();
        }
    }
}

public static void doWithKafkaConsumer(String groupId, String brokers,
        Consumer<KafkaConsumer<String, String>> consumerRunner) {
    Properties props = new Properties();
    props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, brokers);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    try (final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
        consumerRunner.accept(consumer);
    }
}

Please note that one consumer group may consume multiple topics at the same time, so if you need the lag per topic, you have to group and aggregate the results by topic:

    Map<TopicPartition, Long> lags = lagOf(brokers, group);
    Map<String, Long> topicLag = new HashMap<>();
    lags.forEach((tp, lag) -> {
        topicLag.compute(tp.topic(), (k, v) -> v == null ? lag : v + lag);
    });

Run this standalone code. (It depends on kafka-clients-2.6.0.jar.)

import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Properties;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BinaryOperator;
import java.util.stream.Collectors;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CosumerGroupLag {

static String host = "localhost:9092";
static String topic = "topic02";
static String groupId = "test-group";

public static void main(String... vj) {
    CosumerGroupLag cgl = new CosumerGroupLag();

    while (true) {
        Map<TopicPartition, PartionOffsets> lag = cgl.getConsumerGroupOffsets(host, topic, groupId);
        System.out.println("$$LAG = " + lag);
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            // restore the interrupt flag and stop polling
            Thread.currentThread().interrupt();
            return;
        }
    }
}

private final String monitoringConsumerGroupID = "monitoring_consumer_" + UUID.randomUUID().toString();

public Map<TopicPartition, PartionOffsets> getConsumerGroupOffsets(String host, String topic, String groupId) {
    Map<TopicPartition, Long> logEndOffset = getLogEndOffset(topic, host);

    Set<TopicPartition> topicPartitions = new HashSet<>();
    for (Entry<TopicPartition, Long> s : logEndOffset.entrySet()) {
        topicPartitions.add(s.getKey());
    }
    
    KafkaConsumer<String, Object> consumer = createNewConsumer(groupId, host);
    Map<TopicPartition, OffsetAndMetadata> comittedOffsetMeta = consumer.committed(topicPartitions);
    consumer.close(); // otherwise a consumer leaks on every polling iteration

    BinaryOperator<PartionOffsets> mergeFunction = (a, b) -> {
        throw new IllegalStateException();
    };
    Map<TopicPartition, PartionOffsets> result = logEndOffset.entrySet().stream()
            .collect(Collectors.toMap(entry -> (entry.getKey()), entry -> {
                OffsetAndMetadata committed = comittedOffsetMeta.get(entry.getKey());
                long currentOffset = 0;
                if(committed != null) { //committed offset will be null for unknown consumer groups
                    currentOffset = committed.offset();
                }
                return new PartionOffsets(entry.getValue(), currentOffset, entry.getKey().partition(), topic);
            }, mergeFunction));

    return result;
}

public Map<TopicPartition, Long> getLogEndOffset(String topic, String host) {
    Map<TopicPartition, Long> endOffsets = new ConcurrentHashMap<>();
    KafkaConsumer<?, ?> consumer = createNewConsumer(monitoringConsumerGroupID, host);
    List<PartitionInfo> partitionInfoList = consumer.partitionsFor(topic);
    List<TopicPartition> topicPartitions = partitionInfoList.stream()
            .map(pi -> new TopicPartition(topic, pi.partition())).collect(Collectors.toList());
    consumer.assign(topicPartitions);
    consumer.seekToEnd(topicPartitions);
    topicPartitions.forEach(topicPartition -> endOffsets.put(topicPartition, consumer.position(topicPartition)));
    consumer.close();
    return endOffsets;
}

private static KafkaConsumer<String, Object> createNewConsumer(String groupId, String host) {
    Properties properties = new Properties();
    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, host);
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new KafkaConsumer<>(properties);
}

private static class PartionOffsets {
    private long lag;
    private long timestamp = System.currentTimeMillis();
    private long endOffset;
    private long currentOffset;
    private int partion;
    private String topic;

    public PartionOffsets(long endOffset, long currentOffset, int partion, String topic) {
        this.endOffset = endOffset;
        this.currentOffset = currentOffset;
        this.partion = partion;
        this.topic = topic;
        this.lag = endOffset - currentOffset;
    }

    @Override
    public String toString() {
        return "PartionOffsets [lag=" + lag + ", timestamp=" + timestamp + ", endOffset=" + endOffset
                + ", currentOffset=" + currentOffset + ", partion=" + partion + ", topic=" + topic + "]";
    }

}
}
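
With a single-partition topic, each iteration prints a line shaped by PartionOffsets.toString(), roughly: $$LAG = {topic02-0=PartionOffsets [lag=..., timestamp=..., endOffset=..., currentOffset=..., partion=0, topic=topic02]}.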
