
How do you add scrape targets to a Prometheus server that was installed with Kubernetes-Helm?

Background

I have installed Prometheus on my Kubernetes cluster (hosted on Google Container Engine) using the Helm chart for Prometheus.

The Problem

I cannot figure out how to add scrape targets to the Prometheus server. The prometheus.io site describes how I can mount a prometheus.yml file (which contains a list of scrape targets) to a Prometheus Docker container -- I have done this locally and it works. However, I don't know how to specify scrape targets for a Prometheus setup installed via Kubernetes-Helm. Do I need to add a volume to the Prometheus server pod that contains the scrape targets, and therefore update the YAML files generated by Helm?
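For reference, the locally mounted prometheus.yml mentioned above would contain static scrape targets along these lines (the job name and host:port are placeholders, not from the original question):

# Minimal prometheus.yml with static scrape targets (illustrative only)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'my-app'                # hypothetical job name
    static_configs:
      - targets: ['my-app:8080']      # host:port serving /metrics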

I am also not clear on how to expose metrics in a Kubernetes Pod -- do I need to forward a particular port?

You need to add annotations to the service you want to monitor.

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'

From the prometheus.yml in the chart:

  • prometheus.io/scrape: only scrape services that have a value of true
  • prometheus.io/scheme: http or https
  • prometheus.io/path: override if the metrics path is not /metrics
  • prometheus.io/port: if the metrics are exposed on a different port

And yes, you need to expose the metrics port through the Service so that Prometheus can access it.
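Putting this together, a Service with those annotations and an exposed metrics port could look roughly like this (the name, port, and selector are placeholders, not part of the original answer):

apiVersion: v1
kind: Service
metadata:
  name: my-app                        # hypothetical service name
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'    # only needed if the path is not /metrics
    prometheus.io/port: '8080'        # only needed if metrics use a non-default port
spec:
  selector:
    app: my-app                       # must match the pod labels
  ports:
    - name: metrics
      port: 8080
      targetPort: 8080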

First of all, you need to create a ServiceMonitor, which is a custom Kubernetes resource provided by the Prometheus Operator. Just create a servicemonitor.yaml in the manifests folder.

Since we don't have access to the prometheus.yml file to list targets when deploying on Kubernetes, we create a ServiceMonitor, which in turn adds the target to the scrape_config in prometheus.yml. You can read more about it here.

This is a sample servicemonitor.yaml file for exposing Flask App metrics in Prometheus.

apiVersion: monitoring.coreos.com/v1 
kind: ServiceMonitor 
metadata:
  name: flask-metrics
  namespace: prometheus # namespace where prometheus is running
  labels:
    app: flask-app
    release: prom  # name of the Prometheus Helm release
    # (VERY IMPORTANT: find the correct release name by looking at the
    # ServiceMonitors created by the Prometheus release itself; without it,
    # Prometheus will not pick up the Flask app's metrics as a target.)
spec:
  selector:
    matchLabels:
      # must match the labels on the Flask app's Service (see note below)
      app: flask-app
      release: prom
  endpoints:
  - interval: 15s # scrape interval
    path: /metrics # path to scrape
    port: http # named port in target app
  namespaceSelector:
    matchNames:
    - flask # namespace where the app is running

Also add this release label to the Flask app's Service and Deployment manifests, in both the metadata labels and the spec selectors/pod template labels, so the ServiceMonitor's selector can match them; a sketch follows below.
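A hedged sketch of what that could look like for the Flask app (names, namespace, image, and ports are placeholders, not from the original answer):

apiVersion: v1
kind: Service
metadata:
  name: flask-app
  namespace: flask
  labels:
    app: flask-app
    release: prom                 # release label the ServiceMonitor selects on
spec:
  selector:
    app: flask-app
  ports:
    - name: http                  # named port referenced by the ServiceMonitor
      port: 5000
      targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
  namespace: flask
  labels:
    app: flask-app
    release: prom
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
        release: prom
    spec:
      containers:
        - name: flask-app
          image: my-flask-app:latest    # hypothetical image
          ports:
            - containerPort: 5000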

If you encounter a situation where Prometheus is showing the Target but not the endpoints, take a look at this: https://github.com/prometheus-operator/prometheus-operator/issues/3053


This answer sums up about 12 hours of research. Please upvote it if you find it useful.
