
Connect Docker containers to the same network as the current container in Azure Pipelines

I am running an Azure Container Job, where I spin up a different Docker container manually like this:

jobs:
      - job: RunIntegrationTests
        pool:
          vmImage: "ubuntu-18.04"
        container:
          image: mynamespace/frontend_image:latest
          endpoint: My Docker Hub Endpoint
        steps:
          - script: |
              docker run --rm --name backend_container -p 8000:8000 -d backend_image inv server

I have to start this container manually because the image lives in AWS ECR, and the password authentication scheme that Azure provides for it can only be used with a token that expires, so it seems useless here. How can I make backend_container reachable from within subsequent steps of my job? I have tried starting my job with:

options: --network mynetwork

And sharing it with "backend_container", but I get the error:

docker: Error response from daemon: Container cannot be connected to network endpoints: mynetwork

while starting the "frontend" container, which might be because Azure is trying to start the container on multiple networks.
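For reference, ECR authentication can also be scripted directly with the AWS CLI, which issues a short-lived login token. This is only a minimal sketch, assuming the AWS CLI and credentials are available on the agent; the account ID, region, and image tag are placeholders:

steps:
  - script: |
      # Sketch (not from the original post): obtain a short-lived ECR token and
      # pull the backend image. The account ID and region below are placeholders.
      aws ecr get-login-password --region us-east-1 \
        | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
      docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/backend_image:latest
    displayName: Log in to ECR and pull backend image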

To run a container job and attach a container from a custom image to the network created for it, you can use steps as shown in the example below:

steps:
  - task: DownloadPipelineArtifact@2
    inputs:
      artifactName: my-image.img
      targetPath: images
    target: host    # Important, to run this on the host and not in the container

  - bash: |
      docker load -i images/my-image.img
      docker run --rm -d --name my-container -p 8042:8042 my-image   # -d so the step continues while the container runs

      # This is not really robust, as we rely on naming conventions in Azure Pipelines,
      # but I assume they won't change to a completely random name anyway.
      network=$(docker network list --filter name=vsts_network -q)

      docker network connect $network my-container
      docker network inspect $network
    target: host

Note: it is important that these steps run on the host, and not in the container that is started for the container job. This is done by specifying target: host for the task.

In the example, the container started from the custom image can then be addressed as my-container.
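For example, a subsequent step of the container job could then reach it by name (a hypothetical smoke test; the port is the one exposed by the docker run command above):

  - script: |
      # Hypothetical check: my-container is resolvable by name from the job container
      # once both are attached to the same Docker network.
      curl --fail http://my-container:8042/
    displayName: Smoke-test my-container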

I ended up not using the container: property at all, and instead started all containers manually, so that I could specify the same network for them:

steps:
    - task: DockerInstaller@0
      displayName: Docker Installer
      inputs:
        dockerVersion: 19.03.8
        releaseType: stable
    - task: Docker@2
      displayName: Login to Docker hub
      inputs:
        command: login
        containerRegistry: My Docker Hub
    - script: |
        docker network create integration_tests_network
        docker run --rm --name backend --network integration_tests_network -p 8000:8000 -d backend-image inv server
        docker run --rm --name frontend -d --network integration_tests_network frontend-image tail -f /dev/null

And I run subsequent commands in the frontend container with docker exec.
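For example (the inv test command below is only an illustrative placeholder for the real test entry point):

    - script: |
        # Run the integration tests inside the already-running frontend container.
        # "inv test" is a placeholder; substitute the actual test command.
        docker exec frontend inv test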
