I'm getting asymmetrical container discoverability with Multicontainer Docker on AWS. Namely, the first container can find the second, but the second cannot find the first.
I have a Multicontainer Docker deployment on AWS Elastic Beanstalk. Both containers run Node servers built from identical initial code and identical Dockerfiles. Everything is up to date.
Anonymized version of my Dockerrun.aws.json file:
```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "firstContainer",
      "image": "firstContainerImage",
      "essential": true,
      "memoryReservation": 196,
      "links": [
        "secondContainer",
        "redis"
      ],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ]
    },
    {
      "name": "secondContainer",
      "image": "secondContainerImage",
      "essential": true,
      "memoryReservation": 196,
      "environment": [],
      "links": [
        "redis"
      ]
    },
    {
      "name": "redis",
      "image": "redis:4.0-alpine",
      "essential": true,
      "memoryReservation": 128
    }
  ]
}
```
The firstContainer proxies a subset of requests to secondContainer on port 8080, via the address http://secondContainer:8080, which works completely fine. However, if I try to send a request the other way, from secondContainer to http://firstContainer:8080, I get a "Bad Address" error of one sort or another. This happens both from within the servers running on these containers and directly from the containers themselves using wget. It also happens when trying different exposed ports.
If I add "firstContainer" to the "links" field of the second container's definition in the Dockerrun file, I get an error.
My local setup, using docker-compose, does not have this problem at all.
Anyone know what the cause of this is? How can I get symmetrical discoverability on an AWS multicontainer deployment?
I got a response from AWS support on the topic.
The links are indeed one-way, which is an unfortunate limitation. They recommended taking one of two approaches:
I opted for a 3rd approach, which was to have the container that can discover the rest send out pings to inform the others of its docker-network IP address.
I'm exploring another option, which is a mix of links, port mappings, and extraHosts:
```json
{
  "name": "grafana",
  "image": "docker.pkg.github.com/safecast/reporting2/grafana:latest",
  "memoryReservation": 128,
  "essential": true,
  "portMappings": [
    {
      "hostPort": 3000,
      "containerPort": 3000
    }
  ],
  "links": [
    "renderer"
  ],
  "mountPoints": [
    {
      "sourceVolume": "grafana",
      "containerPath": "/etc/grafana",
      "readOnly": true
    }
  ]
},
{
  "name": "renderer",
  "image": "grafana/grafana-image-renderer:2.0.0",
  "memoryReservation": 128,
  "essential": true,
  "portMappings": [
    {
      "hostPort": 8081,
      "containerPort": 8081
    }
  ],
  "mountPoints": [],
  "extraHosts": [
    {
      "hostname": "grafana",
      "ipAddress": "172.17.0.1"
    }
  ]
}
```
This allows grafana to resolve renderer via links as usual, but the renderer container resolves grafana to the host IP (172.17.0.1, the default Docker bridge gateway), which has port 3000 bound back to the grafana container's port. So far it seems to work. The portMappings on renderer might not be required, but I'm still working out all the kinks.
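Applied to the original question's containers, the same pattern might look like the sketch below for secondContainer. It keeps the existing redis link and resolves firstContainer to the bridge gateway, where firstContainer's existing hostPort 80 mapping makes it reachable at http://firstContainer:80 (the host port, not the container port). This assumes the same default 172.17.0.1 gateway; I haven't verified it on this exact setup:

```json
{
  "name": "secondContainer",
  "image": "secondContainerImage",
  "essential": true,
  "memoryReservation": 196,
  "links": [
    "redis"
  ],
  "extraHosts": [
    {
      "hostname": "firstContainer",
      "ipAddress": "172.17.0.1"
    }
  ]
}
```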