
How to configure a "global" Prometheus for services running in multiple Docker networks

I have a "global" network where Prometheus is running, and multiple mini networks where different microservices run. I'm trying to scrape metrics from the microservices (on those mini networks) without adding all the mini networks to the "global" network.

I checked How to configure Prometheus in a multi-location scenario? but I don't think it's the right approach for my scenario.

Running on Docker Swarm, we can take advantage of Docker Swarm's mesh networking to achieve this.

First, you need some things: a master Prometheus stack that defines a global network that this Prometheus instance is attached to.

master-prometheus-stack.yml

networks:
  prometheus:
    name: prometheus
    driver: overlay
    # attachable lets other stacks join this overlay network
    attachable: true

configs:
  prometheus-1:
    file: prometheus-master.yml

services:
  prometheus:
    image: prom/prometheus
    networks:
      - prometheus
    configs:
      - source: prometheus-1
        target: /etc/prometheus/prometheus.yml

This Prometheus instance is configured with a single job: it looks up the DNS round-robin (dnsrr) record tasks.prometheus.scrape to find child Prometheus instances, and scrapes everything from their /federate endpoint.

prometheus-master.yml

scrape_configs:
  - job_name: federate-prometheus
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~".*"}'
    dns_sd_configs:
    - names: [ tasks.prometheus.scrape ]
      type: 'A'
      port: 9090

Then the next part of this cunning plan is simple: each microservice stack deploys with its own private Prometheus instance.

stack.yml

networks:
  stack:
  prometheus_public:
    name: prometheus
    external: true

configs:
  prometheus-1:
    # the stack's own scrape config; the file name here is an example
    file: prometheus.yml

services:
  other-services:
    ...

  prometheus:
    image: prom/prometheus
    networks:
      stack:
      prometheus_public:
        aliases: ["prometheus.scrape"]
    configs:
      - source: prometheus-1
        target: /etc/prometheus/prometheus.yml

This Prometheus instance should have a Prometheus config file with all the scrape jobs required for each microservice in the stack. The Prometheus instance is attached to the microservice network so it can reach all the microservices and scrape them. It is also attached to the public prometheus network, isolating the rest of the stack from that connection. To advertise that it wants to be scraped, we add a specific alias on that network, "prometheus.scrape", which ensures that Docker Swarm adds the IP of this Prometheus to the dnsrr record tasks.prometheus.scrape on that network.
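
The original answer does not show the per-stack config; as a minimal sketch (the job name, service name, and port below are assumptions about a hypothetical microservice in the stack), it might look like this:

prometheus.yml

scrape_configs:
  # scrape a hypothetical microservice over the stack network;
  # tasks.<service-name> resolves to the IPs of all tasks of that swarm service
  - job_name: my-microservice
    dns_sd_configs:
    - names: [ tasks.my-microservice ]
      type: 'A'
      port: 8080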

Now, because dns_sd_configs is probed dynamically, as microservices (each with their own Prometheus instance) are deployed, they add their own "prometheus.scrape" aliases to the common network, and the main Prometheus instance discovers them and incorporates their metrics from their /federate endpoints automatically. Conversely, removing a stack cleanly de-registers the extra instances.

Separating the jobs to actually generate discrete dashboards, perhaps by adding extra labels identifying the source stack, remains an exercise for the reader.
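
One way to approach that (a sketch, not part of the original answer; the label name and value are assumptions) is to set external_labels in each child Prometheus config. Because the master's federation job uses honor_labels: true, the label survives federation and can be used to split dashboards per stack:

global:
  external_labels:
    # identifies which stack these federated metrics came from
    stack: my-stack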
