Why can't I view the Flink metrics on the Prometheus dashboard?
I configured Apache Flink to send metrics to Prometheus through the conf/flink-conf.yaml file:
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
metrics.reporter.prom.host: 192.168.56.1
metrics.reporter.prom.port: 9250-9260
Then I configured Prometheus in the file /etc/prometheus/prometheus.yml:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9100']

  - job_name: 'flink'
    scrape_interval: 5s
    static_configs:
      - targets: ['jobmanager:9250', 'taskmanager1:9251', 'taskmanager2:9252']
The log of the first task manager says that Prometheus is configured:
2019-03-29 17:04:57,347 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: metrics.reporter.prom.class, org.apache.flink.metrics.prometheus.PrometheusReporter
2019-03-29 17:04:57,348 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: metrics.reporter.prom.host, 192.168.56.1
2019-03-29 17:04:57,349 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: metrics.reporter.prom.port, 9250-9260
...
2019-03-29 17:04:59,463 INFO org.apache.flink.runtime.metrics.MetricRegistryImpl - Configuring prom with {port=9250-9260, host=192.168.56.1, class=org.apache.flink.metrics.prometheus.PrometheusReporter}.
2019-03-29 17:04:59,479 INFO org.apache.flink.metrics.prometheus.PrometheusReporter - Started PrometheusReporter HTTP server on port 9251.
2019-03-29 17:04:59,479 INFO org.apache.flink.runtime.metrics.MetricRegistryImpl - Reporting metrics for reporter prom of type org.apache.flink.metrics.prometheus.PrometheusReporter.
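Note that the log shows the reporter bound port 9251 even though the configured range starts at 9250: each reporter instance takes the first port in the range that is still free on its host. A rough sketch of that first-free-port behavior (a simplified model for illustration, not Flink's actual implementation):

```python
import socket

def first_free_port(port_range: str):
    """Bind the first free port in an inclusive range like '9250-9260'.

    Simplified illustration of a port-range reporter config; this is not
    Flink's actual code. Returns (port, bound socket); caller closes it.
    """
    lo, hi = (int(p) for p in port_range.split("-"))
    for port in range(lo, hi + 1):
        sock = socket.socket()
        try:
            sock.bind(("127.0.0.1", port))
            return port, sock
        except OSError:
            sock.close()  # port taken, try the next one in the range
    raise RuntimeError(f"no free port in {port_range}")

# Two "reporters" started on the same host get distinct ports from the range,
# which is why the second TaskManager process ended up on 9251.
p1, s1 = first_free_port("9250-9260")
p2, s2 = first_free_port("9250-9260")
print(p1, p2)
s1.close()
s2.close()
```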
I copied the jar file flink-metrics-prometheus_2.11-1.7.2.jar to the lib directory of both nodes of my Flink instance. And I have a RichMapper class which exposes a Counter and a Meter. Why can I not see the metrics on the Prometheus dashboard?
I deploy my application using this command:
./bin/flink run -c org.sense.flink.App ../../../felipe/eclipse-workspace/explore-flink/target/explore-flink.jar 14 192.168.56.20 &
and I can see the output in one of the task manager logs.
public static class SensorTypeMapper
        extends RichMapFunction<MqttSensor, Tuple2<CompositeKeySensorType, MqttSensor>> {
    private static final long serialVersionUID = -4080196110995184486L;
    private transient Counter counter;
    private transient Meter meter;

    @Override
    public void open(Configuration config) {
        this.counter = getRuntimeContext().getMetricGroup().counter("counterSensorTypeMapper");
        com.codahale.metrics.Meter dropwizardMeter = new com.codahale.metrics.Meter();
        this.meter = getRuntimeContext().getMetricGroup().meter("meterSensorTypeMapper",
                new DropwizardMeterWrapper(dropwizardMeter));
    }

    @Override
    public Tuple2<CompositeKeySensorType, MqttSensor> map(MqttSensor value) throws Exception {
        this.meter.markEvent();
        this.counter.inc();
        // every sensor key: sensorId, sensorType, platformId, platformType, stationId
        // Integer sensorId = value.getKey().f0;
        String sensorType = value.getKey().f1;
        Integer platformId = value.getKey().f2;
        // String platformType = value.getKey().f3;
        Integer stationId = value.getKey().f4;
        CompositeKeySensorType compositeKey = new CompositeKeySensorType(stationId, platformId, sensorType);
        return Tuple2.of(compositeKey, value);
    }
}
I solved it. I just had to configure the correct hostnames in the targets property of the file /etc/prometheus/prometheus.yml:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9100']

  - job_name: 'flink'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9250', 'localhost:9251', '192.168.56.20:9250']
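Since the fix came down to pointing the targets at hosts where a reporter is actually listening, a quick sanity check before looking at the dashboard is to try a TCP connection to each configured target. This helper is a hypothetical sketch, not part of the original setup:

```python
import socket

def target_reachable(target: str, timeout: float = 2.0) -> bool:
    """Return True if a 'host:port' target resolves and accepts a TCP connection."""
    host, port = target.rsplit(":", 1)
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:
        return False

# Targets from the corrected prometheus.yml; unreachable entries are the ones
# Prometheus will mark as "down" on its targets page.
for target in ["localhost:9250", "localhost:9251", "192.168.56.20:9250"]:
    print(target, "reachable" if target_reachable(target) else "unreachable")
```

Prometheus itself reports the same information on its /targets page, but checking from the shell rules out DNS or firewall issues between the Prometheus host and the Flink nodes.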