So, the situation is the following. I have two containers, one offering a DB service and another one offering a front end. The front end container connects to the DB container using one of its ports and then publishes one of its own ports to offer a series of RESTful services.
This configuration runs just fine on the default bridge. However, I have read in the Docker documentation that it is not recommended to run your containers on the default bridge in a production environment, because those ports would be exposed to any machine, not just to containers within the network. The documentation recommends using a custom bridge in this kind of situation.
The idea would be (as in one of the use cases described in the documentation) that my front end is reachable from the host by publishing the corresponding port, but not the DB container, which should only be accessible to the front-end container connected to the same custom bridge.
I have set up such a configuration, but now, even though the port of the front end has been exposed and published, it is not accessible from the host machine. I guess I have done something incorrectly or misunderstood some concept, but I cannot seem to figure it out.
The steps I have taken are the following:
Create custom network:
```
docker network create --subnet=172.19.0.0/16 \
  -o com.docker.network.bridge.enable_ip_masquerade=true \
  -o com.docker.network.bridge.host_binding_ipv4="172.19.0.1" \
  -o com.docker.network.bridge.enable_icc=true \
  -o com.docker.network.bridge.name="serversBridge" \
  servers
```
Run DB container:
```
docker run -d --name testDb --network servers --ip 172.19.0.2 couchdb
```
Run front end container:
```
docker run --name myApp --network servers --ip 172.19.0.3 -p 12345:12345 myApp
```
If I then run `docker ps`, I get the following line:
```
CONTAINER ID   IMAGE   COMMAND     CREATED       STATUS       PORTS                         NAMES
6b8e93e6e9e1   myApp   "./myApp"   4 hours ago   Up 4 hours   172.19.0.1:12345->12345/tcp   myApp
```
However, if I try to access that IP address and port from my host machine, I get a "connection refused" message. I have checked the iptables rules, though, and there is a rule for this:

```
target     prot opt source     destination
ACCEPT     tcp  --  anywhere   172.19.0.3    tcp dpt:12345
```
So my current guess is that my request is actually being forwarded to the container and rejected by it. Is there something I have done wrong, or some concept I have misunderstood?
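An immediate "connection refused" can in fact be told apart from a firewall drop, which supports the guess that the request reaches the container and is rejected there. A minimal sketch, assuming port 1 on loopback has no listener (the port number is illustrative only):

```python
import socket

# Probe a port that is assumed to have no listener (port 1 on loopback).
# A reachable host with no listener answers the SYN with a TCP RST, which
# surfaces as ConnectionRefusedError; a firewall silently dropping packets
# would instead make the connection attempt time out.
try:
    socket.create_connection(("127.0.0.1", 1), timeout=2)
    outcome = "connected"
except ConnectionRefusedError:
    outcome = "refused"    # host reachable, nothing listening (or rejecting)
except socket.timeout:
    outcome = "timed out"  # packets dropped, typical of a firewall DROP rule

print(outcome)
```

Since the error here is an immediate refusal rather than a timeout, the iptables ACCEPT rule is doing its job and the rejection is coming from the endpoint itself.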
I eventually solved this problem by making the application inside the container listen on the wildcard address (`0.0.0.0`) instead of binding it to the loopback address (`127.0.0.1`).
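The difference between the two bind addresses can be demonstrated without Docker. A minimal sketch, assuming a Linux host where the whole `127.0.0.0/8` range is routed to the loopback interface (the `probe` helper and the use of `127.0.0.2` as a "second interface" stand-in are illustrative only):

```python
import socket

def probe(bind_addr, connect_addr):
    """Start a listener bound to bind_addr, then try to connect to the
    same port via connect_addr. Returns "accepted" or "refused"."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, 0))           # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    try:
        socket.create_connection((connect_addr, port), timeout=2).close()
        return "accepted"
    except (ConnectionRefusedError, socket.timeout):
        return "refused"
    finally:
        srv.close()

print(probe("127.0.0.1", "127.0.0.1"))  # accepted: matches the bind address
print(probe("127.0.0.1", "127.0.0.2"))  # refused: arrives on another address
print(probe("0.0.0.0",   "127.0.0.2"))  # accepted: wildcard bind takes all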
My understanding is that, since the host machine accesses the IP address `172.19.0.1` in order to reach this container, the requests do not arrive through the `127.0.0.1` loopback interface and were therefore refused.
This was the case even when I tried forwarding the port to `0.0.0.0:12345` on the host machine and sending a request from the host to `localhost:12345`, because the bridge applies a NAT masquerade and forwards the request to the corresponding internal IP address (`172.19.0.3`).
What I still don't fully understand is why this same configuration worked on the default bridge. I assume the masquerading is not done in the same way there.