Long story short: container C on Docker Swarm host A can access Nginx (deploy mode: global) on Docker Swarm host B, but not on Docker Swarm host A, via the Docker host's IP; the connection times out.
Long story: I have a Docker Swarm with 3 hosts. All Docker containers run on a scope: swarm, driver: overlay network called internal.network. On the swarm I also have 3 Nginx instances (deploy mode: global) running. The Nginx services have their default network set to internal.network, but also a ports configuration with target: 80, published: 80, protocol: tcp, mode: host (and other ports). The idea is that connections to the Docker Swarm hosts are forwarded to the Nginx containers and then forwarded (reverse-proxied) to the Docker containers running on the swarm, such as GitLab, Mattermost, and others. Moreover, the Docker Swarm hosts have keepalived configured to share the same IP (failover), so no matter which Docker host this shared IP is assigned to, there is always an Nginx instance running to accept incoming requests. I am using Oracle Linux 8 (kernel 5.4.17 el8uek) and Docker 20.10.12. Docker is configured with icc: false and userland-proxy: false.
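As a sketch, a stack file for such a global Nginx service might look like the following (the file name and image are illustrative assumptions; the network name, port settings, and deploy mode are taken from the description above):

```yaml
# docker-stack.yml (illustrative sketch, not the original file)
version: "3.8"

services:
  nginx:
    image: nginx:latest
    networks:
      - internal.network
    ports:
      # mode: host bypasses the swarm routing mesh and binds
      # port 80 directly on every host that runs a (global) task
      - target: 80
        published: 80
        protocol: tcp
        mode: host
    deploy:
      mode: global

networks:
  internal.network:
    external: true  # created beforehand with scope: swarm, driver: overlay
```

With deploy mode: global, one Nginx task runs on each swarm host, which is what makes the keepalived failover scheme work: whichever host holds the shared IP has a local Nginx listening on port 80.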
In the following example, addr.foo resolves to the shared IP.
What works:
- keepalived fails over the shared IP, as it occurs with the Docker hosts' IPs too.
- Containers on internal.network can communicate with each other; for example, Mattermost can communicate with the PostgreSQL instance on the internal.network.
- From outside the swarm: curl https://addr.foo and curl https://<shared ip> access Nginx and the reverse-proxied Docker container.
- From outside the swarm: curl https://<host ip> accesses Nginx and the reverse-proxied Docker container.
- From inside a container: curl https://addr.foo or curl https://<shared IP>, when the shared IP is not hosted by the Docker host that is hosting the container itself.
What does not work:
- From inside a container: curl to the Docker Swarm host that is hosting the container. curl (inside the container) resolves the IP of its own Docker Swarm host (e.g. curl https://<Docker host name>), which is correct, but then the connection times out.
- From inside a container: curl to the shared IP when the shared IP is hosted by the Docker host that is running the container. The curl connection times out when accessing the container's own Docker host.
So from inside a container it is not possible to connect to the container's own Docker host's IP, but it is possible to connect to the other Docker hosts' IPs. The network interface ens192 on all Docker hosts is in firewall zone public with all necessary ports open, and external access works.
So my problem is: From within a Docker container it is not possible to establish a connection to the Docker host that is hosting the Docker container but it is possible to connect to another host.
On Docker host 1, with addr.foo resolving to Docker host 2:
docker exec -it <nginx container id> curl https://addr.foo
[...] valid response
docker exec -it <nginx container id> curl https://<docker host 2>
[...] valid response
docker exec -it <nginx container id> curl https://<docker host 1>
connection timed out
Why do I need it: Mattermost authenticates users via GitLab. Therefore, Mattermost needs to connect to GitLab. When Mattermost and GitLab are running on the same Docker swarm host, Mattermost cannot connect to GitLab.
What I do not want to do: Restrict GitLab and Mattermost to not run on the same swarm host.
I also tried to move the interface docker_gwbridge to firewall zone trusted, which led to the problem that the Docker containers did not start up.
I hope that this is enough information to get the idea.
OK, I guess I found the answer here: Docker Userland Proxy.
In the previous section we identified two scenarios where Docker cannot use iptables NAT rules to map a published port to a container service:
When a container connected to another Docker network tries to reach the service (Docker is blocking direct communication between Docker networks);
When a local process tries to reach the service through the loopback interface.
This is what userland-proxy is for, and setting it to true (the default) enables the desired behavior.
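In practice that means setting userland-proxy back to true in /etc/docker/daemon.json and restarting the daemon (e.g. with systemctl restart docker). A minimal daemon.json along those lines (the icc value is kept as in the original setup; this is a sketch, not the original file):

```json
{
  "icc": false,
  "userland-proxy": true
}
```

With the userland proxy enabled, Docker runs a small proxy process for each published port, which handles exactly the two cases quoted above that iptables NAT rules cannot cover, including a container reaching a port published on its own host.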
When communicating between containers, use the service name of the Docker service, not the host IP.
From the CLI of one container, try pinging the other containers by service name. If there is no reply, they are not on the same overlay network.
I faced a similar problem. In my case, nginx did not correctly determine the IP address of the container. Explicitly setting nginx's resolver directive helped:
resolver 127.0.0.11 ipv6=off;
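A sketch of how that directive can be combined with a variable upstream, so that nginx resolves the service name through Docker's embedded DNS at request time rather than once at startup (the server block and the upstream name gitlab are illustrative assumptions, not from the original answer):

```nginx
# 127.0.0.11 is Docker's embedded DNS server inside containers
resolver 127.0.0.11 ipv6=off;

server {
    listen 80;

    location / {
        # using a variable in proxy_pass forces re-resolution at runtime,
        # so nginx picks up new container IPs after a service restart
        set $upstream http://gitlab:80;
        proxy_pass $upstream;
    }
}
```

Without the variable, nginx resolves the name only when the configuration is loaded and keeps using a possibly stale container IP.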