
How to configure kube-prometheus-stack helm installation to scrape a Kubernetes service?

I have installed kube-prometheus-stack as a dependency in my helm chart on a local Docker for Mac Kubernetes cluster v1.19.7. I can view the default Prometheus targets provided by kube-prometheus-stack.

I have a Python Flask service that provides metrics, which I can view successfully in the Kubernetes cluster using kubectl port-forward.

However, I am unable to get these metrics displayed on the Prometheus targets web interface.

The kube-prometheus-stack documentation states that prometheus.io/scrape annotation-based discovery of services is not supported. Instead, the reader is referred to the concepts of ServiceMonitors and PodMonitors.

So, I have configured my service as follows:

---
kind:                       Service
apiVersion:                 v1  
metadata:
  name:                     flask-api-service                    
  labels:
    app:                    flask-api-service
spec:
  ports:
    - protocol:             TCP 
      port:                 4444
      targetPort:           4444
      name:                 web 
  selector:
    app:                    flask-api-service                    
    tier:                   backend 
  type:                     ClusterIP
---
apiVersion:                 monitoring.coreos.com/v1
kind:                       ServiceMonitor
metadata:
  name:                     flask-api-service
spec:
  selector:
    matchLabels:
      app:                  flask-api-service
  endpoints:
  - port:                   web 

Subsequently, I set up a port forward to view the metrics:

kubectl port-forward prometheus-flaskapi-kube-prometheus-s-prometheus-0 9090

Then I visited the Prometheus web page at http://localhost:9090.

When I select the Status->Targets menu option, my flask-api-service is not displayed.

I know that the service is up and running, and I have checked that I can view the metrics for a flask-api-service pod using kubectl port-forward <pod name> 4444.

Looking at a similar issue, it looks as though there is a configuration value serviceMonitorSelectorNilUsesHelmValues that defaults to true. Setting this to false makes the operator look outside its release labels in helm?

I tried adding this to the values.yml of my helm chart, in addition to the extraScrapeConfigs configuration value. However, the flask-api-service still does not appear as an additional target on the Prometheus web page when clicking the Status->Targets menu option.

prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
  extraScrapeConfigs: |
    - job_name: 'flaskapi'
      static_configs:
        - targets: ['flask-api-service:4444']

How do I get my flask-api-service recognised on the Prometheus targets page at http://localhost:9090?

I am installing kube-prometheus-stack as a dependency via my helm chart with default values, as shown below:

Chart.yaml

apiVersion: v2
appVersion: "0.0.1"
description: A Helm chart for flaskapi deployment
name: flaskapi
version: 0.0.1
dependencies:
- name: kube-prometheus-stack
  version: "14.4.0"
  repository: "https://prometheus-community.github.io/helm-charts"
- name: ingress-nginx
  version: "3.25.0"
  repository: "https://kubernetes.github.io/ingress-nginx"
- name: redis
  version: "12.9.0"
  repository: "https://charts.bitnami.com/bitnami"

Values.yaml

docker_image_tag: dcs3spp/
hostname: flaskapi-service
redis_host: flaskapi-redis-master.default.svc.cluster.local 
redis_port: "6379"

prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
  extraScrapeConfigs: |
    - job_name: 'flaskapi'
      static_configs:
        - targets: ['flask-api-service:4444']
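
One thing worth double-checking with this layout: kube-prometheus-stack is pulled in as a chart dependency in Chart.yaml, and Helm only passes overrides to a subchart when they are nested under the dependency's name (or its alias); values placed at the top level of the parent chart's values.yaml are only visible to the parent chart's own templates. A minimal sketch of how the override above would be nested for the subchart, assuming the dependency keeps its default name:

kube-prometheus-stack:
  # values for the subchart live under its name in the parent chart's values.yaml
  prometheus:
    prometheusSpec:
      serviceMonitorSelectorNilUsesHelmValues: false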

The Prometheus custom resource definition has a field called serviceMonitorSelector. Prometheus only listens to ServiceMonitors matched by this selector. In the case of a helm deployment, the selector value is your release name.

release: {{ $.Release.Name | quote }}

So adding this label to your ServiceMonitor metadata should solve the issue. Your ServiceMonitor manifest file will then be:


apiVersion:                 monitoring.coreos.com/v1
kind:                       ServiceMonitor
metadata:
  name:                     flask-api-service
  labels:
    release:                <your_helm_release_name>
spec:
  selector:
    matchLabels:
      app:                  flask-api-service
  endpoints:
  - port:                   web 
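
For context, this is roughly the selector that the kube-prometheus-stack chart renders onto the Prometheus custom resource when serviceMonitorSelectorNilUsesHelmValues is left at its default of true; the resource name below is only illustrative and <your_helm_release_name> is a placeholder for your actual release name:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: flaskapi-kube-prometheus-s-prometheus   # illustrative name
spec:
  serviceMonitorSelector:
    matchLabels:
      release: <your_helm_release_name>

Any ServiceMonitor that does not carry a matching release label is ignored, which is why flask-api-service never shows up until the label is added. Once the label matches, the new target should appear under Status->Targets after Prometheus reloads its configuration.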
