
Flink Kafka metrics: How to get them

Ideally, I'd like to run some experiments that measure the pressure on the input consumer of my application (i.e., whether my application lags behind the input topic because messages arrive faster than they are processed). I've been told this is a common thing to measure, but I have no clue how to do it.

I'm reading the Flink 1.9 metrics docs, and from what I understand I have to configure conf/flink-conf.yaml (in standalone mode), say for the JMX reporter, like this:

metrics.reporter.jmx.factory.class: org.apache.flink.metrics.jmx.JMXReporterFactory
metrics.reporter.jmx.port: 8789

Then am I supposed to run the Flink app with the start-cluster.sh script, and then what? Where are those metrics stored?
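
For example, once the cluster is up with that config, is the idea that the metrics are just exposed as MBeans and I'm supposed to connect to the JMX port myself and browse them? Something along these lines is what I have in mind (a rough sketch; I'm assuming the standard RMI form of the JMX URL and the port from my config above):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import java.util.Set;

public class FlinkJmxBrowser {
    public static void main(String[] args) throws Exception {
        // Port taken from metrics.reporter.jmx.port in flink-conf.yaml (8789 above).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:8789/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // List every MBean whose domain starts with org.apache.flink; the Kafka
            // consumer metrics should show up with the task/operator scope in the name.
            Set<ObjectName> names = mbs.queryNames(new ObjectName("org.apache.flink*:*"), null);
            for (ObjectName name : names) {
                System.out.println(name);
            }
        } finally {
            connector.close();
        }
    }
}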

In the same docs, committedOffsets and currentOffsets are specified for the Kafka connector. Is the consumer lag defined as committedOffsets - currentOffsets, or not? There are more metrics listed there (e.g., records-lag-avg), and it is stated that the Kafka consumer's own metrics are also exposed. Can anyone provide me with a step-by-step guide? I'm a bit confused.
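
For reference, this is how I would sanity-check the lag by hand with the plain Kafka client (the broker address, group.id and topic name below are just placeholders for my setup). Is this the same quantity those connector metrics report?

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

public class LagCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker address
        props.put("group.id", "my-flink-group");           // placeholder consumer group id
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // All partitions of the input topic (placeholder name).
            List<TopicPartition> partitions = consumer.partitionsFor("input-topic").stream()
                    .map(p -> new TopicPartition(p.topic(), p.partition()))
                    .collect(Collectors.toList());

            // Latest offset available per partition vs. what the group has committed.
            Map<TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);
            for (TopicPartition tp : partitions) {
                OffsetAndMetadata committed = consumer.committed(tp);
                long committedOffset = committed == null ? 0L : committed.offset();
                System.out.println(tp + " lag=" + (endOffsets.get(tp) - committedOffset));
            }
        }
    }
}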

I would use the Prometheus JMX exporter.
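
For example, you can attach the exporter as a Java agent to the Flink JVMs through flink-conf.yaml. This is only a sketch; the jar and config paths are placeholders and 9404 is an arbitrary port choice:

env.java.opts: "-javaagent:/opt/jmx_prometheus_javaagent.jar=9404:/opt/jmx_exporter_config.yaml"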

Depending on your environment setup, use either a Docker image for Prometheus and Grafana or a Kubernetes Helm chart.
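
However you run Prometheus, it then needs a scrape job pointing at the exporter port. A minimal prometheus.yml could look like this (the target host and port are assumptions matching the agent example above):

scrape_configs:
  - job_name: 'flink'
    static_configs:
      - targets: ['localhost:9404']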

There is an open-source Grafana dashboard preconfigured for Apache Kafka metrics.
