
Configure Prometheus to scrape all pods in a cluster

Hey, I'm currently trying to configure the scrape config of a Prometheus agent to collect from all pods in the cluster. What I really care about right now is tracking CPU and memory, but other metrics wouldn't hurt. I can get Kubernetes resource and Prometheus-related metrics from the cluster, but I can't get any metrics from a running test pod (it's a basic Node.js Express app).

Also, I'm wondering whether each pod needs to export metrics to Prometheus to get CPU/memory information, or whether that should be covered by the kubelet running on the node?

Any information would help. Below is the configuration I have so far and some debugging.

I specified the following scrape config:

      remote_write:
          - url: http://xxxx.us-east-1.elb.amazonaws.com/

      scrape_configs:
          - job_name: 'kubernetes-pods'

            kubernetes_sd_configs:
                - role: pod
                  api_server: https://kubernetes.default.svc
                  tls_config:
                      insecure_skip_verify: true
                  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            relabel_configs:
                - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
                  action: replace
                  target_label: __metrics_path__
                  regex: (.+)
                - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
                  action: replace
                  regex: (.+):(?:\d+);(\d+)
                  replacement: ${1}:${2}
                  target_label: __address__
                - action: labelmap
                  regex: __meta_kubernetes_pod_label_(.+)
                - source_labels: [__meta_kubernetes_namespace]
                  action: replace
                  target_label: kubernetes_namespace
                - source_labels: [__meta_kubernetes_pod_name]
                  action: replace
                  target_label: kubernetes_pod_name

          - job_name: 'kubernetes-kubelet'
            scheme: https
            tls_config:
                insecure_skip_verify: true
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            kubernetes_sd_configs:
                - role: node
                  api_server: https://kubernetes.default.svc
                  tls_config:
                      insecure_skip_verify: true
                  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            relabel_configs:
                - action: labelmap
                  regex: __meta_kubernetes_node_label_(.+)
                - source_labels: [__meta_kubernetes_node_name]
                  regex: (.+)
                  target_label: __metrics_path__
                  replacement: /api/v1/nodes/${1}/proxy/metrics

          - job_name: 'kubernetes-cadvisor'
            scheme: https
            tls_config:
                insecure_skip_verify: true
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            kubernetes_sd_configs:
                - role: node
                  api_server: https://kubernetes.default.svc
                  tls_config:
                      insecure_skip_verify: true
                  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            relabel_configs:
                - action: labelmap
                  regex: __meta_kubernetes_node_label_(.+)
                - source_labels: [__meta_kubernetes_node_name]
                  regex: (.+)
                  target_label: __metrics_path__
                  replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
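
For comparison, the example config shipped in the Prometheus repo (documentation/examples/prometheus-kubernetes.yml) differs from the kubernetes-pods job above in two ways that may matter here: it only keeps pods that opt in via a prometheus.io/scrape annotation, and its port-rewrite regex also matches an __address__ that has no port. A pod whose containers declare no ports is discovered as a bare IP, which the regex (.+):(?:\d+);(\d+) above can never match, so __address__ is left pointing at the scheme's default port. A minimal sketch of those two rules:

          relabel_configs:
              - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
                action: keep
                regex: true
              - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
                action: replace
                regex: ([^:]+)(?::\d+)?;(\d+)
                replacement: ${1}:${2}
                target_label: __address__

Separately, on the CPU/memory question: the kubernetes-cadvisor job above already surfaces per-container usage (for example container_cpu_usage_seconds_total and container_memory_working_set_bytes) without the application exporting anything itself.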

When running kubectl logs prometheus_pod_name, I don't see any errors:

ts=2022-07-12T23:12:49.302Z caller=main.go:491 level=info msg="No time or size retention was set so using the default time retention" duration=15d
ts=2022-07-12T23:12:49.303Z caller=main.go:535 level=info msg="Starting Prometheus Server" mode=server version="(version=2.36.2, branch=HEAD, revision=d7e7b8e04b5ecdc1dd153534ba376a622b72741b)"
ts=2022-07-12T23:12:49.303Z caller=main.go:540 level=info build_context="(go=go1.18.3, user=root@f051ce0d6050, date=20220620-13:21:35)"
ts=2022-07-12T23:12:49.303Z caller=main.go:541 level=info host_details="(Linux 5.4.196-108.356.amzn2.x86_64 #1 SMP Thu May 26 12:49:47 UTC 2022 x86_64 prometheus-5bbc9d5cf9-hrmbr (none))"
ts=2022-07-12T23:12:49.303Z caller=main.go:542 level=info fd_limits="(soft=1048576, hard=1048576)"
ts=2022-07-12T23:12:49.303Z caller=main.go:543 level=info vm_limits="(soft=unlimited, hard=unlimited)"
ts=2022-07-12T23:12:49.307Z caller=web.go:553 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
ts=2022-07-12T23:12:49.308Z caller=main.go:972 level=info msg="Starting TSDB ..."
ts=2022-07-12T23:12:49.309Z caller=tls_config.go:195 level=info component=web msg="TLS is disabled." http2=false
ts=2022-07-12T23:12:49.311Z caller=head.go:493 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
ts=2022-07-12T23:12:49.311Z caller=head.go:536 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.925µs
ts=2022-07-12T23:12:49.311Z caller=head.go:542 level=info component=tsdb msg="Replaying WAL, this may take a while"
ts=2022-07-12T23:12:49.311Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
ts=2022-07-12T23:12:49.311Z caller=head.go:619 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=24.899µs wal_replay_duration=267.82µs total_replay_duration=321.491µs
ts=2022-07-12T23:12:49.313Z caller=main.go:993 level=info fs_type=XFS_SUPER_MAGIC
ts=2022-07-12T23:12:49.313Z caller=main.go:996 level=info msg="TSDB started"
ts=2022-07-12T23:12:49.313Z caller=main.go:1177 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
ts=2022-07-12T23:12:49.315Z caller=dedupe.go:112 component=remote level=info remote_name=8ffa18 url=http://xxxx.elb.amazonaws.com/ msg="Starting WAL watcher" queue=8ffa18
ts=2022-07-12T23:12:49.315Z caller=dedupe.go:112 component=remote level=info remote_name=8ffa18 url=http://xxxx.elb.amazonaws.com/ msg="Starting scraped metadata watcher"
ts=2022-07-12T23:12:49.316Z caller=main.go:1214 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=3.904525ms db_storage=866ns remote_storage=1.510606ms web_handler=314ns query_engine=762ns scrape=251.061µs scrape_sd=663.267µs notify=1.042µs notify_sd=2.62µs rules=1.523µs tracing=4.328µs
ts=2022-07-12T23:12:49.317Z caller=main.go:957 level=info msg="Server is ready to receive web requests."
ts=2022-07-12T23:12:49.318Z caller=dedupe.go:112 component=remote level=info remote_name=8ffa18 url=http://xxxx.elb.amazonaws.com/ msg="Replaying WAL" queue=8ffa18
ts=2022-07-12T23:12:49.318Z caller=manager.go:937 level=info component="rule manager" msg="Starting rule manager..."
ts=2022-07-12T23:12:56.818Z caller=dedupe.go:112 component=remote level=info remote_name=8ffa18 url=http://xxxx.us-east-1.elb.amazonaws.com/ msg="Done replaying WAL" duration=7.500419538s

The currently running pods (in case it helps):

❯ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       paigo-agent-transformer-58fc696d66-nz6z2   1/1     Running   0          18h
default       prometheus-5bbc9d5cf9-hrmbr                1/1     Running   0          13m
kube-system   aws-node-97f6p                             1/1     Running   0          5d6h
kube-system   aws-node-lnb4g                             1/1     Running   0          5d6h
kube-system   aws-node-m7dsb                             1/1     Running   0          5d6h
kube-system   coredns-7f5998f4c-25f92                    1/1     Running   0          5d6h
kube-system   coredns-7f5998f4c-jdtbk                    1/1     Running   0          5d6h
kube-system   kube-proxy-2f97k                           1/1     Running   0          5d6h
kube-system   kube-proxy-flgw7                           1/1     Running   0          5d6h
kube-system   kube-proxy-hw2rr                           1/1     Running   0          5d6h
kube-system   metrics-server-64cf6869bd-x4xgb            1/1     Running   0          5h58m

I've also confirmed that the data is being sent to the remote endpoint correctly.

What I've read so far: "How to discover pods for Prometheus to scrape" and "Prometheus auto-discovery of K8s".

I think there's most likely something obvious I'm missing; I just don't know how to debug it.
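
One thing worth checking, since the test pod's manifest isn't shown: the relabel rules above only rewrite the scrape target when the pod carries the matching prometheus.io/* annotations, and a plain Express app exposes nothing Prometheus can parse unless it serves metrics itself (for example via a client library). A hypothetical annotation block, where port 3000 and /metrics are assumptions rather than values from the question:

      metadata:
          annotations:
              prometheus.io/path: "/metrics"  # assumed path where the app serves metrics
              prometheus.io/port: "3000"      # assumed port the Express app listens on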

These metrics usually come from kube-state-metrics, which is included in the prometheus-operator / kube-prometheus-stack Helm charts. Once it's installed in the cluster, you'll have a pod like this:

prom-mfcloud-kube-state-metrics-7d947c8c5c-4rgz6         1/1     Running   2 (4d21h ago)   4d21h
  • kube-state-metrics: https://github.com/kubernetes/kube-state-metrics

  • kube-prometheus-stack Helm chart: https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
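
If kube-state-metrics is installed on its own rather than via kube-prometheus-stack (which wires up the scraping for you), Prometheus still needs a job that points at its service. A minimal sketch, assuming a standalone install: the service DNS name and namespace are assumptions that depend on the release, and 8080 is the default metrics port:

      - job_name: 'kube-state-metrics'
        static_configs:
            - targets: ['kube-state-metrics.kube-system.svc:8080']  # assumed service name/namespace; 8080 is the default metrics port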
