
Docker Swarm: bypass load balancer and make direct request to specific containers

I have two containers running in a swarm. Each exposes a /stats endpoint which I am trying to scrape.

However, using the swarm port obviously results in the queries being load balanced and therefore the stats are all intermingled:

+--------------------------------------------------+
|                       Server                     |
|    +-------------+             +-------------+   |
|    |             |             |             |   |
|    | Container A |             | Container B |   |
|    |             |             |             |   |
|    +-------------+             +-------------+   |
|                 \              /                 |
|                  \            /                  |
|                 +--------------+                 |
|                 |              |                 |
|                 | Swarm Router |                 |
|                 |              |                 |
|                 +--------------+                 |
|                         v                        |
+-------------------------|------------------------+
                          |                         
                       A Stats                      
                       B Stats                      
                       A Stats                      
                       B Stats                      
                          |                         
                          v                          

I want to keep the load balancer for application requests, but also need a direct way to make requests to each container to scrape the stats.

+--------------------------------------------------+
|                       Server                     |
|    +-------------+             +-------------+   |
|    |             |             |             |   |
|    | Container A |             | Container B |   |
|    |             |             |             |   |
|    +-------------+             +-------------+   |
|        |        \              /         |       |
|        |         \            /          |       |
|        |        +--------------+         |       |
|        |        |              |         |       |
|        |        | Swarm Router |         |       |
|        v        |              |         v       |
|        |        +--------------+         |       |
|        |                |                |       |
+--------|----------------|----------------|-------+
         |                |                |
      A Stats             |             B Stats
      A Stats       Normal Traffic      B Stats
      A Stats             |             B Stats
         |                |                |
         |                |                |
         v                |                v

A dynamic solution would be ideal, but since I don't intend to do any dynamic scaling, something like hardcoded ports for each container would be fine:

::8080  Both containers via load balancer
::8081  Direct access to container A
::8082  Direct access to container B

Can this be done with swarm?

From inside an overlay network you can get the IP addresses of all replicas with a tasks.<service_name> DNS query:

; <<>> DiG 9.11.5-P4-5.1+deb10u5-Debian <<>> -tA tasks.foo_test
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19860
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;tasks.foo_test.            IN  A

;; ANSWER SECTION:
tasks.foo_test.     600 IN  A   10.0.1.3
tasks.foo_test.     600 IN  A   10.0.1.5
tasks.foo_test.     600 IN  A   10.0.1.6

This is mentioned in the documentation.
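The same lookup can be done programmatically from any container attached to the overlay network. A minimal sketch, assuming a service named foo_test whose replicas serve /stats on port 8080 (both names are placeholders from the dig example above):

```python
import socket
import urllib.request

def resolve_tasks(service_name):
    """Resolve tasks.<service> to one virtual IP per replica.

    Only works from inside a container attached to the same overlay
    network, where Docker's embedded DNS answers the query.
    """
    infos = socket.getaddrinfo(f"tasks.{service_name}", None,
                               socket.AF_INET, socket.SOCK_STREAM)
    # Deduplicate and sort the A records.
    return sorted({info[4][0] for info in infos})

def stats_urls(ips, port=8080, path="/stats"):
    """Build one scrape URL per replica IP."""
    return [f"http://{ip}:{port}{path}" for ip in ips]

if __name__ == "__main__":
    # Scrape every replica directly, bypassing the VIP load balancer.
    for url in stats_urls(resolve_tasks("foo_test")):
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(url, resp.read())
```

Because tasks.<service> bypasses the service's virtual IP, each resolved address hits exactly one replica.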

Also, if you use Prometheus to scrape those endpoints for metrics, you can combine the above with dns_sd_configs to set the scrape targets (there are articles describing how). This is easy to get running but somewhat limited in features (especially in large environments).
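A minimal scrape configuration along those lines (the service name foo_test and port 8080 are assumptions carried over from the example above):

```yaml
scrape_configs:
  - job_name: 'swarm-service-stats'
    metrics_path: /stats
    dns_sd_configs:
      - names:
          - 'tasks.foo_test'   # one A record per replica
        type: 'A'
        port: 8080
```

Prometheus re-resolves the name on each refresh interval, so replicas that come and go are picked up automatically.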

A more advanced way to achieve the same is to use dockerswarm_sd_config (docs, example configuration). This way the list of endpoints is gathered by querying the Docker daemon, along with some useful labels (e.g. node name, service name, custom labels).
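A sketch of such a configuration; the socket path is the Linux default, Prometheus must be able to reach a manager node's Docker API, and the service name filter is an assumption:

```yaml
scrape_configs:
  - job_name: 'swarm-tasks'
    metrics_path: /stats
    dockerswarm_sd_configs:
      - host: unix:///var/run/docker.sock  # Docker API of a manager node
        role: tasks                        # discover individual tasks (replicas)
    relabel_configs:
      # Keep only tasks belonging to the service we want to scrape.
      - source_labels: [__meta_dockerswarm_service_name]
        regex: foo_test
        action: keep
```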

While less than ideal, you can introduce a microservice that acts as an intermediary to the other containers exposing /stats. This microservice would have to be configured with the individual endpoints and run on the same network as those endpoints.

This doesn't bypass the load balancer; instead, it makes the load balancer irrelevant.

The intermediary could roll up the information, or you could make it more sophisticated by returning a list of opaque identifiers which the caller can then use to query the intermediary for each replica individually.
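A rough sketch of the roll-up variant. The endpoint addresses and the assumption that /stats returns JSON are both hypothetical:

```python
import json
import urllib.request

# Hypothetical replica endpoints; in a real deployment these would be
# the container addresses on the shared overlay network.
ENDPOINTS = [
    "http://container-a:8080/stats",
    "http://container-b:8080/stats",
]

def fetch_stats(url):
    """Fetch one replica's /stats payload (assumed to be JSON)."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def roll_up(per_replica):
    """Merge per-replica stats under opaque replica identifiers."""
    return {f"replica-{i}": stats for i, stats in enumerate(per_replica)}

if __name__ == "__main__":
    # The intermediary would serve this merged document from its own endpoint.
    print(json.dumps(roll_up([fetch_stats(u) for u in ENDPOINTS]), indent=2))
```

The opaque "replica-N" keys are what a more sophisticated intermediary could hand back to callers for per-replica follow-up queries.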

It is slightly an "anti-pattern" in the sense that you have a highly coupled "stats" proxy that must be configured to reach each endpoint.

That said, it is good in the sense that you don't have to expose individual containers outside of the proxy. From a security perspective, this may be better because you're not leaking additional information out of your swarm.

You can try publishing a specific container port on the host machine by adding this to your service:

    ports:
      - target: 8081
        published: 8081
        protocol: tcp
        mode: host
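In a fuller compose-file context (service and image names are placeholders), with the caveat that host-mode publishing binds the port on each node directly, so two replicas scheduled on the same node would collide on it:

```yaml
version: "3.8"
services:
  app:
    image: myapp:latest      # placeholder image
    deploy:
      replicas: 1            # host mode: at most one replica per node can bind 8081
    ports:
      - target: 8081
        published: 8081
        protocol: tcp
        mode: host
```

This skips the swarm routing mesh entirely: requests to <node-ip>:8081 reach the replica running on that specific node, which matches the hardcoded-port scheme from the question.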
