Set up Prometheus with docker-compose to get metrics of existing Kubernetes pods
I have a Prometheus config that works on my cluster, which is deployed by Terraform. Now, I would like to set up Prometheus locally (outside Terraform) using the same prometheus.yml.
I created a new project to set up Prometheus using docker-compose with the same prometheus.yml file, but when I open the Prometheus UI, the Kubernetes metrics seem to be unavailable, such as these metrics about Kubernetes containers: container_cpu_usage_seconds_total, container_cpu_load_average_10s, container_memory_usage_bytes, container_memory_rss.
Could you please let me know what I am missing in my project to make this work?
This is prometheus.yml:
```yaml
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'kube-state-metrics'
    static_configs:
      - targets: ['10.36.1.10']
      - targets: ['10.36.2.6']
      - targets: ['10.36.1.12']
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - api_server: https://10.36.1.10:6443
        role: node
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        tls_config:
          insecure_skip_verify: true
      - api_server: https://10.36.2.6:6443
        role: node
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        tls_config:
          insecure_skip_verify: true
      - api_server: https://10.36.1.12:6443
        role: node
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        tls_config:
          insecure_skip_verify: true
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_label_component]
        action: replace
        target_label: job
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
```
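As a side note, the address-rewriting relabel rule in the config above can be sanity-checked outside Prometheus. The sketch below (not part of the original config) mimics how Prometheus applies it: the source labels are joined with `;`, the regex is matched fully anchored, and the `$1:$2` replacement builds the new `__address__`:

```python
import re

# Same pattern as the relabel rule; Prometheus anchors its regexes,
# so fullmatch mimics the matching behavior.
PATTERN = re.compile(r"([^:]+)(?::\d+)?;(\d+)")

def rewrite_address(address: str, port_annotation: str) -> str:
    """Join __address__ and the prometheus.io/port annotation with ';'
    and apply the $1:$2 replacement, as the relabel rule does."""
    joined = f"{address};{port_annotation}"
    match = PATTERN.fullmatch(joined)
    if match is None:
        # No match: the relabel action leaves the target label unchanged.
        return address
    return f"{match.group(1)}:{match.group(2)}"

print(rewrite_address("10.36.2.7:8080", "9102"))  # existing port is replaced
print(rewrite_address("10.36.2.7", "9102"))       # missing port is appended
```

Both calls print `10.36.2.7:9102`: the rule always ends up scraping the pod IP on the port from the `prometheus.io/port` annotation.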
And this is docker-compose.yml:
```yaml
version: '3'
services:
  prometheus:
    image: prom/prometheus:v2.21.0
    ports:
      - 9000:9090
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus-data:/prometheus
    command: --web.enable-lifecycle --config.file=/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana:$GRAFANA_VERSION
    environment:
      GF_SECURITY_ADMIN_USER: $GRAFANA_ADMIN_USER
      GF_SECURITY_ADMIN_PASSWORD: $GRAFANA_ADMIN_PASSWORD
    ports:
      - 3000:3000
    volumes:
      - grafana-storage:/var/lib/grafana
    depends_on:
      - prometheus
    networks:
      - internal
networks:
  internal:
volumes:
  prometheus-data:
  grafana-storage:
```
You are running Prometheus in two places: on the server and locally.
The server instance works fine and gets the metrics of the Kubernetes containers because it runs on Kubernetes. The docker-compose instance does not work because it runs locally on Docker, not on the Kubernetes cluster.
This is a target problem: your local Prometheus cannot reach the metrics endpoints of your Kubernetes cluster. For example, if you run Prometheus locally but want to monitor an external Kubernetes cluster, you have to expose your kube-state-metrics service on an IP that is reachable from outside the cluster.
In that case, your Prometheus config will have a job like:
```yaml
scrape_configs:
  - job_name: 'kube-state-metrics'
    static_configs:
      - targets: ['address']  # externally reachable IP:port of the kube-state-metrics service
```
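One common way to make kube-state-metrics reachable from outside the cluster is a NodePort service. A minimal sketch, with the caveat that the service name, namespace, labels, and ports below are assumptions and may differ in your cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics-external   # hypothetical name
  namespace: kube-system              # assumes kube-state-metrics runs here
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: kube-state-metrics
  ports:
    - port: 8080        # default kube-state-metrics metrics port
      targetPort: 8080
      nodePort: 30080   # scrape <node-ip>:30080 from outside the cluster
```

With something like this in place, the static target becomes a node IP plus the NodePort, e.g. '10.36.1.10:30080'.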
In your case, you have to do something like:

```yaml
kubernetes_sd_configs:
  - api_server: https://<ip>:6443
    role: node
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_config:
      insecure_skip_verify: true
```

This will fetch the metrics of the Kubernetes cluster, and you can see the data locally.
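One detail that is easy to miss when moving this config off-cluster: bearer_token_file points at the in-cluster service-account path, which does not exist inside a local Docker container. A hedged sketch of bridging that gap by mounting a token exported from the cluster into the Prometheus container (the host file ./token is an assumption, obtained from a service account with read permissions on the cluster):

```yaml
services:
  prometheus:
    volumes:
      - ./prometheus:/etc/prometheus
      # Mount a service-account token exported from the cluster at the
      # exact path the scrape config expects (assumed host file ./token).
      - ./token:/var/run/secrets/kubernetes.io/serviceaccount/token:ro
```

Without this mount, the kubernetes_sd_configs jobs will fail to authenticate against the API server even when it is network-reachable.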
You can read this nice gist: https://gist.github.com/sacreman/b61266d2ec52cf3a1af7c278d9d93450