
Need for service discovery for docker engine swarm mode

I'm confused about docker swarm. As far as I know, the old way to run a swarm was to run the manager and workers in containers, before docker engine provided native support for swarm mode. The documentation for the old, containerized swarm explained how to set up service discovery using consul, etcd or zookeeper. Service discovery is necessary, since services are run at random ports to avoid collisions, right?

The documentation for docker engine swarm mode doesn't explain how to set up service discovery. Now I'm confused whether the mechanism is included in swarm mode, or the documentation is incomplete.

Where can I find a clear, up-to-date explanation of swarm mode and how it relates to concepts like service discovery?

Indeed, since docker 1.12, docker swarm mode implements its own service discovery capabilities.

On a single host setup (testing)

To look into it, and for example its load balancing capabilities, you can do the following:

# Set up your docker engine as a docker swarm manager
docker swarm init
# Create an nginx service
docker service create --name nginx --publish 80:80 nginx

Now you can list services using docker service ls and see that you have an nginx service. If you do a docker ps, you'll see that your container is not exposing any ports directly to the machine, but if you inspect your service, the port is indeed exposed as a service port. So to access your container, you'll need to connect to the docker swarm manager's address on your published port. Here, since your machine is the manager, you'll access localhost:80, or $DOCKER_HOST:80 if using docker-machine or equivalent.

> docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS               NAMES
7f9d93dbbce5        nginx:latest        "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp     nginx.1.4zr3zacuw06ax9swuit4wbacd
> curl -X GET localhost:80
# Result showing nginx stuff
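To confirm that the port is published at the service level rather than on the container, you can query the service's endpoint directly. A minimal sketch (requires a docker engine in swarm mode with the nginx service from above):

```shell
# Show the ports published by the service (service-level, not container-level)
docker service inspect nginx --format '{{json .Endpoint.Ports}}'
# Each entry lists the Protocol, the TargetPort inside the container,
# and the PublishedPort exposed on the swarm's ingress network
```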

If you want to refer to the documentation, there is a lot of information on the swarm key concepts page and on the swarm mode routing mesh page.

On a multi host setup

If you are running a multi-host setup, as you would in normal usage of swarm mode, you would have at least two docker engines running in swarm mode: one as a worker, one as a manager. By default, the manager is also a worker and can host containers.

When interacting with the swarm, you'll always talk directly to the docker swarm manager. You can then create an nginx service as above, and the service will be scheduled on either the manager or the worker node. Then, to access your container via its published port, you'll connect to the manager node's IP, which will forward the request to the container, whether it runs on the worker or the manager node. You can also scale the service and see the load balancing happen, as requests will be spread across the containers in a round-robin fashion.
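The scaling and round-robin behaviour described above can be sketched as follows (the replica count and the manager IP placeholder are illustrative; this assumes the nginx service created earlier):

```shell
# Scale the nginx service up to 3 replicas
docker service scale nginx=3
# Check which nodes the tasks landed on (manager and/or worker)
docker service ps nginx
# Repeated requests to any node's IP are spread across the replicas
# by the routing mesh
curl http://<manager-ip>:80
```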

Internal service discovery

Since docker 1.12, there is also an internal service discovery feature that lets services reach each other by service name via DNS.

To access this feature, you'll need to create an overlay network and attach your services to it:

 docker network create --driver overlay mynetwork
 docker service create --name nginx --network mynetwork nginx
 docker service create --name testing --network mynetwork node sleep 10000 # node image because it already has the ping command
 # locate your testing service's container, then ping the nginx service by name
 docker exec -ti <testing-container-id> ping nginx
 # See the magic happen
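Behind the scenes, the service name resolves to a virtual IP (VIP) on the overlay network, which the embedded load balancer maps to the service's tasks. A way to see this, sketched here (the container ID is a placeholder for the testing service's container):

```shell
# The VIP assigned to the nginx service on the overlay network
docker service inspect nginx --format '{{json .Endpoint.VirtualIPs}}'
# From inside a container attached to mynetwork, the service name
# resolves to that VIP (getent is available in Debian-based images)
docker exec -ti <testing-container-id> getent hosts nginx
```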

Once again, a lot of this is covered in the documentation, under the Docker Engine > Manage a swarm section. See the Swarm mode overview.

While the answer given by @MagicMicky is correct, I'll try to add more context on the differences between Swarm Legacy and Swarm Mode regarding service discovery:

Note: I'll refer to the first version of Swarm as Swarm legacy and the new version as Swarm mode.

Service discovery with Swarm Legacy

Using Swarm Legacy, you had to deploy your own Zookeeper, Consul or Etcd to manage the cluster topology, i.e. nodes being assigned as agents in the cluster. These distributed key/value stores were used for health monitoring and distributed locking purposes. They were not used by Swarm for service discovery, only for cluster node discovery and monitoring.

If you wanted service discovery for your containers deployed through Swarm, you had to set up an external Consul/Registrator/DNS stack, for example, and register your services on those solutions. One example off the top of my head of such a system built specifically for Swarm was Wagl.

With later versions of the docker engine (1.11), you also had access to a built-in DNS when creating overlay networks and assigning containers to them. Before 1.11, the (controversial) mechanism for service discovery was to append service entries to /etc/hosts.
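For reference, overlay networking in that era required each engine to be configured against an external key/value store before a multi-host overlay could be created. A hedged sketch of the legacy setup, using the old daemon flags (the consul address and interface name are placeholders):

```shell
# Legacy (pre swarm mode): each engine pointed at an external KV store
dockerd --cluster-store=consul://<consul-host>:8500 \
        --cluster-advertise=eth0:2376
# Only then could a multi-host overlay network be created
docker network create --driver overlay legacy-net
```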

In any case, overlay networking was not directly included with Swarm; it was a separate component requiring its own setup, more of an "add-on".

Generally, the "philosophy" behind the first version of Swarm was to provide something simple and reliable to manage containers across hosts; if you needed more capabilities, such as service discovery or load balancing, you had to roll your own.


Service Discovery with Docker Swarm Mode

As of Docker 1.12, service discovery is included directly in docker through Swarm mode, with an embedded DNS and load balancer. This means there is no need for an external component to manage service discovery and load balancing anymore.

When you create a service and attach it to an overlay network, its DNS name is registered, and other services on the same overlay can reach it by its service name. Tasks running for a service are properly load balanced by the built-in LB.
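The built-in load balancing defaults to a virtual IP per service; swarm mode also lets you opt for DNS round-robin instead, e.g. to plug in an external load balancer. A sketch of both modes (service and network names are illustrative):

```shell
# Default: the service name resolves to a single VIP, and the
# built-in load balancer spreads requests across tasks
docker service create --name web --network mynetwork nginx
# Alternative: DNS round-robin, where the service name resolves
# directly to each task's IP instead of a VIP
docker service create --name web-dnsrr --endpoint-mode dnsrr --network mynetwork nginx
```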

For Swarm mode, the "philosophy" is more about including everything out of the box (certificate management and rotation, service discovery, load balancing, cluster metadata through a built-in datastore, networking, scheduling) to ensure that you have the most complete system possible from day one. You are still able to swap and replace some of the components if need be.

