Prometheus: Metric from pod not found in Prometheus

I am currently running a metrics server inside my pod. The data is exposed inside the pod at localhost:9090, and I can retrieve it from within the pod via curl. The deployment.yaml is annotated so the data should be scraped, but I don't see any of these new metrics in Prometheus. What am I doing wrong?

metrics I see inside pod:

cpu_usage{process="COMMAND", pid="PID"} %CPU
cpu_usage{process="/bin/sh", pid="1"} 0.0
cpu_usage{process="sh", pid="8"} 0.0
cpu_usage{process="/usr/share/filebeat/bin/filebeat-god", pid="49"} 0.0
cpu_usage{process="/usr/share/filebeat/bin/filebeat", pid="52"} 0.0
cpu_usage{process="php-fpm:", pid="66"} 0.0
cpu_usage{process="php-fpm:", pid="67"} 0.0
cpu_usage{process="php-fpm:", pid="68"} 0.0
cpu_usage{process="nginx:", pid="69"} 0.0
cpu_usage{process="nginx:", pid="70"} 0.0
cpu_usage{process="nginx:", pid="71"} 0.0
cpu_usage{process="/bin/sh", pid="541"} 0.0
cpu_usage{process="bash", pid="556"} 0.0
cpu_usage{process="/bin/sh", pid="1992"} 0.0
cpu_usage{process="ps", pid="1993"} 0.0
cpu_usage{process="/bin/sh", pid="1994"} 0.0

deployment.yaml

  template:
    metadata:
      labels:
        app: supplier-service
      annotations:
        prometheus.io/path: /
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9090'
    spec:
      containers:
        - # ... (container name/image omitted here)
          ports:
            - containerPort: 80
            - containerPort: 443
            - containerPort: 9090

prometheus.yml

global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when
  # communicating with external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'
# Scraping Prometheus itself
scrape_configs:
- job_name: 'prometheus'
  scrape_interval: 5s
  static_configs:
  - targets: ['localhost:9090']
- job_name: 'kubernetes-service-endpoints'
  scrape_interval: 5s
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__address__]
    action: replace
    regex: ([^:]+)(?::\d+)?
    replacement: $1:9090
    target_label: __address__
  - source_labels: [__meta_kubernetes_service_name]
    action: replace
    target_label: kubernetes_name

Port numbers are correct. What am I doing wrong?

Your kubernetes_sd_configs is configured to look for endpoints, which are created by Services. Do you have an Endpoints object for your service? You can check with kubectl get endpoints in your namespace. If you don't want to create a Service, you could instead configure Prometheus to scrape pod targets directly (role: pod); check the kubernetes_sd_config docs for more info.
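For the first option, a minimal Service along these lines (a sketch assuming the app: supplier-service label and port 9090 shown in your deployment; the Service name is hypothetical) would create the Endpoints object that the endpoints role discovers:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: supplier-service      # hypothetical name
  labels:
    app: supplier-service     # labelmap copies Service labels onto the scraped series
spec:
  selector:
    app: supplier-service     # must match the pod template's labels
  ports:
    - name: metrics
      port: 9090
      targetPort: 9090        # the port your metrics server listens on
```

Once it exists, kubectl get endpoints supplier-service should list your pod's IP; if it lists nothing, the selector doesn't match the pod labels.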

Also, the documentation on metric and label naming says a metric name must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*, so a dash (-) in a metric name would be rejected by Prometheus as well.
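If you go the no-Service route instead, a pod-role job along these lines could replace the endpoints job. This is a sketch based on the conventional prometheus.io/* annotations (the same ones your deployment already sets), not something specific to your cluster; note it reads the port and path from the annotations rather than hardcoding 9090:

```yaml
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # keep only pods annotated with prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # override the metrics path from the prometheus.io/path annotation, if set
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # rewrite the target address to use the prometheus.io/port annotation
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
```

After reloading Prometheus, the pod should appear under Status > Targets in the kubernetes-pods job, which is the quickest way to see whether discovery and relabeling worked.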
