
docker swarm container linking: "tasks.service1" vs directly "service1"

When using a docker swarm, what is the difference between using "tasks.service1" and using "service1" directly when curling/pinging?

Practical example: I start a docker swarm with a service on an overlay network, as follows:

$> docker network create --driver=overlay public
$> docker service create --name service1 --replicas=2 --network public ubuntu sleep 10000

Now I list the containers:

$> docker ps -a
bd645378cb2d   ubuntu:latest   "sleep 10000"   43 seconds ago   Up 41 seconds             service1.1.rjy91s66col81libdilrd698j
686c0ab006fc   ubuntu:latest   "sleep 10000"   43 seconds ago   Up 41 seconds             service1.2.wjrwsj6h6rcadsknzxym4h9w0

If I attach to the first container, I can ping the others:

$> docker exec -ti bd645378cb2d bash
$> apt update
$> apt install iputils-ping dnsutils
$> ping service1 # returns ok 10.0.1.65

When I dig the special hostname tasks.service1 I get all the replicas' IPs, but these do not match the one I get with ping.

$> dig tasks.service1 # returns 10.0.1.66 & 10.0.1.67

Why don't the IPs match? If I need to connect to service2 from service1, should I use tasks.service2 or service2?

This is a load-balancer IP. A Docker service gets one when it is created with the default vip (Virtual IP) --endpoint-mode. You can see it with docker inspect <service_name>:

"Endpoint": {
    "Spec": {
        "Mode": "vip"
    },
    "VirtualIPs": [
        {
            "NetworkID": "i7k7pv9s4v7dgvc57zjmh6pk6",
            "Addr": "10.0.1.2/24"
        }
    ]
}
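
As a small side sketch (reusing the service1 service and the NetworkID/Addr values from the inspect output above), the VIP can also be pulled out directly with a Go template instead of reading the full JSON:

$> docker service inspect --format '{{json .Endpoint.VirtualIPs}}' service1
[{"NetworkID":"i7k7pv9s4v7dgvc57zjmh6pk6","Addr":"10.0.1.2/24"}]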

This is mentioned in the documentation, although it is easy to miss the point:

To use an external load balancer without the routing mesh, set --endpoint-mode to dnsrr instead of the default value of vip. In this case, there is not a single virtual IP.
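
As a minimal sketch of that alternative (reusing the public network and ubuntu image from the question), the same service created with dnsrr gets no VIP at all, so the service name itself resolves straight to the task IPs:

$> docker service create --name service1 --replicas=2 --network public --endpoint-mode dnsrr ubuntu sleep 10000
$> docker exec -ti <container_id> dig +short service1   # with dnsrr: one A record per replica, no VIP (dig needs dnsutils installed as in the question)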

The tasks.service1 name will resolve to the IP addresses of each individual container in the service. This can be useful if you need to reference individual replicas.
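
You can see both behaviors from inside one of the containers of the question (addresses taken from the outputs above; they will differ on your network):

$> dig +short service1         # 10.0.1.65 -> the VIP
$> dig +short tasks.service1   # 10.0.1.66 and 10.0.1.67 -> one A record per replica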

However, there's a downside: DNS is cached in most OSs and applications. That means that during an update to your service, a stale DNS resolution may point to an IP that is no longer reachable, or to an entirely different container that was recently started with a recycled IP.

To handle this in Swarm Mode, a virtual IP (VIP) is the default when resolving service1. It is dynamically updated as replicas are created or deleted, without the DNS caching issue, and it performs round-robin load balancing on each new connection.
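
As a purely hypothetical illustration (nothing below is part of the question's setup, and whoami stands in for any image that answers HTTP requests with its container hostname): with such a replicated service on the same network, each new connection through the VIP may be answered by a different replica:

$> docker service create --name service2 --replicas=2 --network public whoami   # hypothetical hostname-echoing image
$> docker exec -ti bd645378cb2d bash
$> apt install curl
$> for i in 1 2 3 4; do curl -s http://service2/; done   # replies should alternate between the two replicas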

Side note: this VIP is also used by the ingress on published ports, which means you could end up with an extra network hop if you have an external LB pointing to the cluster, a port published on ingress, and a globally scheduled service behind it. In those cases, I typically bypass the VIP, publish the port in host mode, and let the external LB direct requests to the different nodes in the cluster.
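
For that last setup, a rough sketch of a host-mode publish on a globally scheduled service (web and nginx are just placeholders; the port is then exposed directly on every node for the external LB to target, bypassing the ingress VIP):

$> docker service create --name web --mode global --publish mode=host,target=80,published=8080 nginx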
