
Configure Prometheus for monitoring multiple microservices

I want to monitor a Spring Boot microservices application, made up of about 20 microservices and running on Docker Compose, using Prometheus and Grafana.
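For context, here is a minimal docker-compose.yml sketch (image names and ports are illustrative, not from my actual setup) showing how Prometheus and Grafana might sit on the same Compose network as the services, so that targets like service-one:8080 resolve by service name:

version: "3.8"
services:
  service-one:
    image: example/service-one:latest      # hypothetical Spring Boot service image
    ports:
      - "8080:8080"
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml   # scrape config mounted at the default path
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"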

What is the best approach:

1- Having one job, with one target per microservice?

scrape_configs:
  - job_name: 'services-job'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['service-one:8080']
        labels:
          group: 'service-one' 
      - targets: ['service-two:8081']
        labels:
          group: 'service-two' 

2- Having multiple jobs, one per service, each with a single target?

scrape_configs:
  - job_name: 'service-one-job'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['service-one:8080']
        labels:
          group: 'service-one'
  - job_name: 'service-two-job'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['service-two:8081']
        labels:
          group: 'service-two'  
 

The way you group your targets by job has nothing to do with the number of endpoints to scrape.

You need to group all the targets with the same purpose in the same job. That's exactly what the documentation says:

A collection of instances with the same purpose, a process replicated for scalability or reliability for example, is called a job.
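Read against that definition, a job corresponds to one logical service (one "purpose"), and any replicas of that service are its instances. A sketch of what that could look like for this setup (replica names and ports are illustrative):

scrape_configs:
  - job_name: 'service-one'              # one job per logical service
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      # every replica of service-one is an instance of the same job
      - targets: ['service-one-1:8080', 'service-one-2:8080']
  - job_name: 'service-two'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['service-two:8081']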
