Deployed prometheus with Django and Kubernetes, how to make it scrape the Django app?
I have a Django project deployed in Kubernetes and I am trying to deploy Prometheus as a monitoring tool. I have successfully done all the steps needed to include django_prometheus in the project, and locally I can go to localhost:9090 and play around with querying the metrics.
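For context, the django_prometheus wiring usually looks like the sketch below (based on the library's documented setup; adapt to your own settings and URL layout). The ordering matters: the Before middleware must come first and the After middleware last, so request metrics bracket everything else.

```python
# settings.py -- django_prometheus setup sketch
INSTALLED_APPS = [
    # ... your apps ...
    "django_prometheus",
]

MIDDLEWARE = [
    "django_prometheus.middleware.PrometheusBeforeMiddleware",
    # ... your other middleware ...
    "django_prometheus.middleware.PrometheusAfterMiddleware",
]

# urls.py -- exposes the /metrics endpoint on the app itself
from django.urls import include, path

urlpatterns = [
    # ... your routes ...
    path("", include("django_prometheus.urls")),
]
```

With this in place, the app serves its metrics at `/metrics` on whatever port the app itself listens on.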
I have also deployed Prometheus to my Kubernetes cluster, and upon running kubectl port-forward ... on the Prometheus pod I can see some metrics of my Kubernetes resources.
Where I am a bit confused is how to make the deployed Django app's metrics available on the Prometheus dashboard just like the others. I deployed my app in the default namespace and Prometheus in a dedicated monitoring namespace. I am wondering what I am missing here. Do I need to expose ports 8000 to 8005 on the Service and Deployment according to the number of workers, or something like that?
My Django app runs with gunicorn under supervisord, like so:
[program:gunicorn]
command=gunicorn --reload --timeout 200000 --workers=5 --limit-request-line 0 --limit-request-fields 32768 --limit-request-field_size 0 --chdir /code/ my_app.wsgi
my_app service:
apiVersion: v1
kind: Service
metadata:
  name: my_app
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: my-app
  sessionAffinity: None
  type: ClusterIP
deployment.yaml (trimmed version):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-app
  name: my-app-deployment
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: my-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - image: ...
        imagePullPolicy: IfNotPresent
        name: my-app
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: regcred
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
prometheus configmap
apiVersion: v1
data:
  prometheus.rules: |-
    ... some rules
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    rule_files:
    - /etc/prometheus/prometheus.rules
    scrape_configs:
    - job_name: prometheus
      static_configs:
      - targets:
        - localhost:9090
    - job_name: my-app
      metrics_path: /metrics
      static_configs:
      - targets:
        - localhost:8000
    - job_name: 'node-exporter'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_endpoints_name]
        regex: 'node-exporter'
        action: keep
kind: ConfigMap
metadata:
  labels:
    name: prometheus-config
  name: prometheus-config
  namespace: monitoring
You do not have to expose services if Prometheus is installed on the same cluster as your app. You can communicate with apps across namespaces by using Kubernetes DNS resolution, which follows the rule:
SERVICENAME.NAMESPACE.svc.cluster.local
so one way is to change your Prometheus job target to something like this:
- job_name: speedtest-ookla
  metrics_path: /metrics
  static_configs:
  - targets:
    - 'my_app.default.svc.cluster.local:9000'
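The DNS rule above can be sketched as a tiny helper (hypothetical function name, just to illustrate how the target string is assembled; the port should match the Service's exposed port):

```python
def cluster_fqdn(service: str, namespace: str, port: int) -> str:
    """Build an in-cluster target per SERVICENAME.NAMESPACE.svc.cluster.local."""
    return f"{service}.{namespace}.svc.cluster.local:{port}"

# A Service named "my_app" in the "default" namespace:
target = cluster_fqdn("my_app", "default", 9000)
print(target)  # my_app.default.svc.cluster.local:9000
```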
And this is the "manual" way. A better approach is to use Prometheus kubernetes_sd_config. It will autodiscover your services and try to scrape them.
Reference: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config
No need to expose the application outside the cluster.
Leveraging Kubernetes service discovery, add jobs to scrape Services, Pods, or both:
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
    regex: (.+)
  - regex: __meta_kubernetes_service_label_(.+)
    action: labelmap
  - regex: 'app_kubernetes_io_(.+)'
    action: labeldrop
  - regex: 'helm_sh_(.+)'
    action: labeldrop
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
    regex: (.+)
  - source_labels: [__meta_kubernetes_pod_node_name]
    action: replace
    target_label: host
    regex: (.+)
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod
    regex: (.+)
  - regex: __meta_kubernetes_pod_label_(.+)
    action: labelmap
  - regex: 'app_kubernetes_io_(.+)'
    action: labeldrop
  - regex: 'helm_sh_(.+)'
    action: labeldrop
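To see what the __address__ rewrite does: relabeling joins the source label values with the default ';' separator before matching, so an address like 10.0.0.5:80 plus a port annotation of 9000 becomes the string "10.0.0.5:80;9000", which the rule rewrites to "10.0.0.5:9000". A quick check of the same regex in Python (Python uses \1/\2 where Prometheus uses $1/$2):

```python
import re

# Same regex as the relabel rule above.
pattern = re.compile(r"([^:]+)(?::\d+)?;(\d+)")

# __address__ = "10.0.0.5:80", prometheus.io/port annotation = "9000",
# joined by the default ';' separator before matching.
joined = "10.0.0.5:80;9000"
rewritten = pattern.sub(r"\1:\2", joined)
print(rewritten)  # 10.0.0.5:9000

# The optional (?::\d+)? group also handles an address with no port:
print(pattern.sub(r"\1:\2", "10.0.0.5;9000"))  # 10.0.0.5:9000
```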
Then, annotate the Service with:
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "80"
    prometheus.io/path: "/metrics"
and the Deployment with:
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "80"
        prometheus.io/path: "/metrics"