Services don't start on docker swarm nodes

I want to deploy HA PostgreSQL with failover via Patroni, and HAProxy as a single entry point, in docker swarm.

I have this docker-compose.yml:

version: "3.7"

services:
    etcd1:
        image: patroni
        networks:
          - test
        env_file:
          - docker/etcd.env
        container_name: test-etcd1
        hostname: etcd1
        command: etcd -name etcd1 -initial-advertise-peer-urls http://etcd1:2380


    etcd2:
        image: patroni
        networks:
          - test
        env_file:
          - docker/etcd.env
        container_name: test-etcd2
        hostname: etcd2
        command: etcd -name etcd2 -initial-advertise-peer-urls http://etcd2:2380

    etcd3:
        image: patroni
        networks:
          - test
        env_file:
          - docker/etcd.env
        container_name: test-etcd3
        hostname: etcd3
        command: etcd -name etcd3 -initial-advertise-peer-urls http://etcd3:2380

    patroni1:
        image: patroni
        networks:
          - test
        env_file:
          - docker/patroni.env
        hostname: patroni1
        container_name: test-patroni1
        environment:
            PATRONI_NAME: patroni1
        deploy:
          placement:
            constraints: [node.role == manager]
#              - node.labels.type == primary
#              - node.role == manager

    patroni2:
        image: patroni
        networks:
          - test
        env_file:
          - docker/patroni.env
        hostname: patroni2
        container_name: test-patroni2
        environment:
            PATRONI_NAME: patroni2
        deploy:
          placement:
            constraints: [node.role == worker]
#              - node.labels.type != primary
#              - node.role == worker

    patroni3:
        image: patroni
        networks:
          - test
        env_file:
          - docker/patroni.env
        hostname: patroni3
        container_name: test-patroni3
        environment:
            PATRONI_NAME: patroni3
        deploy:
          placement:
            constraints: [node.role == worker]
#              - node.labels.type != primary
#              - node.role == worker

    haproxy:
        image: patroni
        networks:
          - test
        env_file:
          - docker/patroni.env
        hostname: haproxy
        container_name: test-haproxy
        ports:
            - "5000:5000"
            - "5001:5001"
        command: haproxy

networks:
  test:
    driver: overlay
    attachable: true

And I deploy these services into the swarm with this command:

docker stack deploy --compose-file docker-compose.yml test

When I run this command, the services are created, but patroni2 and patroni3 don't start on the other nodes, whose role is worker. They don't start at all!

I want to see my services deployed across all the nodes that exist in the swarm (3 of them: one manager and two workers). But if I delete the constraints, all my services start on a single node when I deploy docker-compose.yml to the swarm.

Maybe these services can't see my network, even though I set it up following the official Docker documentation.

With different service names, docker will not attempt to spread containers across multiple nodes, and will fall back to the least used node that satisfies the requirements, where least used is measured by the number of scheduled containers.

You could attempt to solve this by using the same service name with 3 replicas. This requires that the replicas be defined identically. To make that work, you can leverage a few features: first, tasks.etcd will resolve to the individual IP addresses of each etcd service container; second, service templates can be used to inject values like {{.Task.Slot}} into the settings for hostname, volume mounts, and env variables (see the sketch below). The challenge is that none of those settings gives you what you really want: a way to uniquely address each replica from the other replicas. Hostname seems like it would work, but it unfortunately does not resolve in docker's DNS implementation (and wouldn't be easy to implement, since it's possible to create a container with the capabilities to change the hostname after docker has deployed it).
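For example, the three etcd services could be collapsed into one replicated service, roughly like this. This is a minimal sketch based on the question's file: it shows the template mechanics, but whether etcd's clustering flags can actually be bootstrapped via tasks.etcd discovery is left open, for the reasons above. ETCD_NAME is etcd's standard environment equivalent of the -name flag.

version: "3.7"

services:
    etcd:
        image: patroni
        networks:
          - test
        env_file:
          - docker/etcd.env
        # {{.Task.Slot}} expands to 1, 2 and 3, one value per replica
        hostname: "etcd{{.Task.Slot}}"
        environment:
            # standard etcd env var, equivalent to "etcd -name etcdN"
            ETCD_NAME: "etcd{{.Task.Slot}}"
        deploy:
          replicas: 3

networks:
  test:
    driver: overlay
    attachable: true

Containers on the same overlay network can then resolve tasks.etcd to get all three task IPs, but as explained above, the templated hostnames themselves will not resolve in docker's DNS.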

The option you are left with is configuring constraints on each service to run on specific nodes. That's less than ideal, and reduces the fault tolerance of these services. If you have lots of nodes that can be separated into 3 groups then using node labels would solve the issue.
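If you do go the per-node constraints route, node labels make it explicit. A minimal sketch (the node names here are placeholders for whatever docker node ls shows in your swarm):

docker node update --label-add type=patroni1 manager-node
docker node update --label-add type=patroni2 worker-node-1
docker node update --label-add type=patroni3 worker-node-2

Then each patroni service pins itself to one label:

        deploy:
          placement:
            constraints:
              - node.labels.type == patroni1

This gives a stable one-to-one mapping of service to node, at the cost of the fault tolerance mentioned above: if a labelled node goes down, its service has nowhere else to run.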
