
Elastic Beanstalk Docker private registry with docker-compose

I have a Docker image in a private registry that is used for a team project.
Each team member git-clones a docker-compose.yml that provides ready-to-go configuration of the container's volumes, environment and ports.

version: '3'
services:
  webApp:
      image: my-private-registry/docker-app:latest
      ports:
        - 80:80
      volumes:
        - vendors:/var/www/app/vendor
        - ./var/logs/apache2:/var/log/apache2
volumes:
  vendors:

Now I want to deploy that image/compose-file project to AWS Elastic Beanstalk, but the platform cannot access the private Docker registry when using the docker-compose file (an "image may require docker login" error).

Some info on what I've tried and noted so far:
A] If the image is public, the docker-compose file (which so far I simply upload through the web console) does work: the image is pulled, a container is created and the app runs fine.
However, if the image is private, it cannot gain access, even after following the AWS instructions here.

{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "bucket": "my-s3-bucket",
    "key": "config.json"
  }
}
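For reference, the config.json that the Authentication block points at is a standard Docker credential file, normally produced by docker login. A minimal sketch of building and uploading one by hand (registry name, user and password below are placeholders, not the real values):

```shell
# Build a Docker auth file by hand (placeholder credentials).
# "docker login my-private-registry" would normally generate this
# in ~/.docker/config.json.
AUTH=$(printf '%s:%s' "myuser" "mypassword" | base64)

cat > config.json <<EOF
{
  "auths": {
    "my-private-registry": {
      "auth": "${AUTH}"
    }
  }
}
EOF

# Then upload it to the bucket referenced by the Authentication block:
# aws s3 cp config.json s3://my-s3-bucket/config.json
```

The auth value is just base64 of "user:password", so the file can be regenerated whenever the registry credentials rotate.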

And by reading eb-engine.log, I can see that the initial docker-compose pull works fine, but the final docker-compose up later fails, triggering the error, as if the authentication were lost along the way.

I know the docker-compose pull works because setting wrong credentials in the config.json on the S3 bucket triggers an error.

B] The auth and config work perfectly with a private Docker image if I use only a Dockerrun.aws.json instead of the docker-compose file.

{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "bucket": "my-s3-bucket",
    "key": "config.json"
  },
  "Ports": [
     {
      "ContainerPort": 80,
      "HostPort": 80
     }
  ],
  "Image": {
     "Name": "my-private-registry/docker-app:latest",
     "Update": "true"
  },
  "Volumes" : [
    {
      "HostDirectory":"/var/app/current/var/logs/apache2",
      "ContainerDirectory":"/var/log/apache2"
    }
  ]
}

which is all right for testing purposes, but forces us to duplicate any change from the docker-compose file into it. Since the compose file is used across other, non-AWS environments, this will be less than ideal in the long run.

What am I missing? Is there a mismatch between my environment's configuration and the docker-compose file?

Thanks

Update 1: By using sudo watch -n 1 -d cat /root/.docker/config.json,
I've been able to see that the credentials are present during the docker-compose pull, but as soon as CleanEbExtensions is launched, they're gone.
And that command is launched BEFORE docker-compose is executed, with a docker-compose down --rmi all executed in between, nullifying the pull.

How come?

UPDATE

Turns out it was an AWS bug. I've detailed the steps for a workaround in my answer below.

Well, it turns out it was a bug on AWS's side. I've found a very similar question:

AWS EB docker-compose deployment from private registry access forbidden

The current solution is to use the platform deploy hooks instead, to either log in to Docker or copy the auth file back into place.
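A minimal sketch of such a hook (the file name and paths are assumptions; on the instance it would live under .platform/hooks/predeploy/ and restore /root/.docker/config.json before docker-compose up runs):

```shell
#!/bin/bash
# Hypothetical .platform/hooks/predeploy/01_docker_login.sh
# Re-creates the Docker auth file that CleanEbExtensions wipes, so the
# final docker-compose up can still pull the private image.
set -eu

# Overridable so the copy logic can be exercised outside an EB instance.
DOCKER_CONFIG_DIR="${DOCKER_CONFIG_DIR:-/root/.docker}"
# On EB this file would first be fetched from the same S3 object the
# Dockerrun Authentication block references, e.g.:
#   aws s3 cp s3://my-s3-bucket/config.json /tmp/eb-registry-auth.json
TMP_AUTH="${TMP_AUTH:-/tmp/eb-registry-auth.json}"

restore_docker_auth() {
    mkdir -p "$DOCKER_CONFIG_DIR"
    cp "$TMP_AUTH" "$DOCKER_CONFIG_DIR/config.json"
}

# Only act when the downloaded auth file is actually present.
if [ -f "$TMP_AUTH" ]; then
    restore_docker_auth
fi
```

Alternatively, the hook could simply run docker login with credentials fetched from S3 or an environment property; either way the point is that it executes after CleanEbExtensions and before docker-compose up. Note that hook scripts must be executable (chmod +x).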
