Prometheus + Kubernetes metrics coming from wrong scrape job
I deployed the Prometheus server (+ kube-state-metrics + node-exporter + Alertmanager) through the Prometheus Helm chart using the chart's default values, including the chart's default scrape_configs. The problem is that I expect certain metrics to come from a particular job, but instead they come from a different one.
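For context, a minimal sketch of how such a deployment is typically created, assuming Helm 2 with Tiller and the stable chart (the heritage="Tiller" and chart="prometheus-7.0.2" labels in the query output below suggest this; the release name is taken from those labels too):

# Hypothetical install command; chart defaults only, no custom values file
helm install --name get-prometheus stable/prometheus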
For example, node_cpu_seconds_total is being provided by the kubernetes-service-endpoints job, but I expect it to come from the kubernetes-nodes job, i.e. node-exporter. The returned metric's values are accurate, but the problem is that I don't have the labels that would normally come from kubernetes-nodes (since the kubernetes-nodes job has role: node vs. role: endpoints for kubernetes-service-endpoints). I need these missing labels for advanced querying + dashboards.
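For reference, the role difference comes from each job's kubernetes_sd_configs. A minimal sketch, abridged from what the chart's default jobs look like (only the discovery stanzas are shown; everything else is assumed):

# kubernetes-nodes discovers kubelets through the node role
- job_name: 'kubernetes-nodes'
  kubernetes_sd_configs:
    - role: node

# kubernetes-service-endpoints discovers the Endpoints behind annotated
# Services, which is how node-exporter's service lands in this job
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
    - role: endpoints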
Output of node_cpu_seconds_total{mode="idle"}:
node_cpu_seconds_total{app="prometheus",chart="prometheus-7.0.2",component="node-exporter",cpu="0",heritage="Tiller",instance="10.80.20.46:9100",job="kubernetes-service-endpoints",kubernetes_name="get-prometheus-node-exporter",kubernetes_namespace="default",mode="idle",release="get-prometheus"} | 423673.44 node_cpu_seconds_total{app="prometheus",chart="prometheus-7.0.2",component="node-exporter",cpu="0",heritage="Tiller",instance="10.80.20.52:9100",job="kubernetes-service-endpoints",kubernetes_name="get-prometheus-node-exporter",kubernetes_namespace="default",mode="idle",release="get-prometheus"} | 417097.16
There are no errors in the logs, and I do have other kubernetes-nodes metrics such as up and storage_operation_errors_total, so node-exporter is getting scraped.
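One quick way to see which jobs are scraping successfully is to query up per job (my own example, not from the question):

up{job="kubernetes-nodes"}
up{job="kubernetes-service-endpoints"}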
I also verified manually that node-exporter has this particular metric, node_cpu_seconds_total, with curl <node IP>:9100/metrics | grep node_cpu, and it returns results.
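The grep returns the usual exposition-format lines; abridged, with values illustrative:

# HELP node_cpu_seconds_total Seconds the cpus spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 423673.44
node_cpu_seconds_total{cpu="0",mode="iowait"} 1250.12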
Does the job definition order matter? Would one job override the other's metrics if they have the same name? Should I be dropping metrics for the kubernetes-service-endpoints job? I'm new to Prometheus, so any detailed help is appreciated.
I was able to figure out how to add the "missing" labels by navigating to the Prometheus service-discovery status UI page. This page shows all the "Discovered Labels" that can be processed and kept through relabel_configs. What is processed/kept is shown next to "Discovered Labels" under "Target Labels". So then it was just a matter of modifying the kubernetes-service-endpoints job config in scrape_configs to add more target labels. Below is exactly what I changed in the chart's scrape_configs. With this new config, I get namespace, service, pod, and node added to all metrics if the metric didn't already have them (see honor_labels).
  - job_name: 'kubernetes-service-endpoints'
+   honor_labels: true
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
-       target_label: kubernetes_namespace
+       target_label: namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
-       target_label: kubernetes_name
+       target_label: service
+     - source_labels: [__meta_kubernetes_pod_name]
+       action: replace
+       target_label: pod
+     - source_labels: [__meta_kubernetes_pod_node_name]
+       action: replace
+       target_label: node
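As a usage sketch (my own example, not part of the chart), the new node label is what makes per-node dashboard queries possible:

# Non-idle CPU usage rate per node; requires node-exporter series to carry a node label
sum by (node) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))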
From the scrape configs, the kubernetes-nodes job probes https://kubernetes.default.svc:443/api/v1/nodes/${node_name}/proxy/metrics, while the kubernetes-service-endpoints job probes every endpoint of the services that have prometheus.io/scrape: true defined, which includes node-exporter. So in your configs, the node_cpu_seconds_total metric definitely comes from the kubernetes-service-endpoints job.
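To make that concrete, here is a minimal sketch of a Service that the kubernetes-service-endpoints job would pick up; the name, port, and selector labels are taken from the labels in the question's output, while the rest of the manifest is assumed:

apiVersion: v1
kind: Service
metadata:
  name: get-prometheus-node-exporter   # matches kubernetes_name in the question's output
  annotations:
    prometheus.io/scrape: "true"       # this is what the 'keep' relabel rule matches
spec:
  selector:
    app: prometheus
    component: node-exporter
  ports:
    - name: metrics
      port: 9100                       # matches the :9100 instance addresses above
      targetPort: 9100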