
GitLab CI ssh registry login

I have a GitLab project gitlab.com/my-group/my-project which has a CI pipeline that builds an image and pushes it to the project's GitLab registry as registry.gitlab.com/my-group/my-project:tag. I want to deploy this image to Google Compute Engine, where I have a VM running Docker.

Easy enough to do it manually by SSHing into the VM, then docker login registry.gitlab.com and docker run ... registry.gitlab.com/my-group/my-project:tag. Except the docker login command is interactive, which is a no-go for CI. It can accept a username and password on the command line, but that hardly feels like the right thing to do, even if my login info is in a secret variable (storing my GitLab login credentials in a GitLab secret variable?...)

This is the intended workflow on the Deploy stage of the pipeline:

  1. Either install the gcloud tool or use an image with it preinstalled
  2. gcloud compute ssh my-gce-vm-name --quiet --command "docker login registry.gitlab.com && docker run registry.gitlab.com/my-group/my-project:tag"

Since the gcloud command would be running within the GitLab CI runner, it could have access to secret variables, but is that really the best way to log in to the GitLab Registry over SSH from GitLab?

I'll answer my own question in case anyone else stumbles upon it. GitLab creates an ephemeral access token for each build of the pipeline that gives the user gitlab-ci-token access to the GitLab Registry; it is exposed to the job as the CI_BUILD_TOKEN variable (renamed CI_JOB_TOKEN in later GitLab versions). The solution was to log in as the gitlab-ci-token user in the build.

.gitlab-ci.yml (excerpt):

deploy:
  stage: deploy
  before_script:
    - gcloud compute ssh my-instance-name --command "docker login registry.gitlab.com/my-group/my-project -u gitlab-ci-token -p $CI_BUILD_TOKEN"
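
Note that passing the token with -p exposes it in the remote shell's process list. Docker 17.07+ supports --password-stdin, which reads the password from standard input instead, and since SSH forwards stdin, the pipe reaches the remote docker login. A hedged variant of the same job, assuming a recent Docker on the VM and using CI_JOB_TOKEN (the newer name of CI_BUILD_TOKEN):

```yaml
deploy:
  stage: deploy
  before_script:
    # Pipe the job token over SSH into docker login on the VM,
    # so it never appears on a command line.
    - echo "$CI_JOB_TOKEN" | gcloud compute ssh my-instance-name --command "docker login registry.gitlab.com -u gitlab-ci-token --password-stdin"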

The docker login command creates a local configuration file at $HOME/.docker/config.json in which your credentials are stored; it looks like this (also see the documentation on this):

{
    "auths": {
        "<registry-url>": {
            "auth": "<credentials>"
        }
    }
}

As long as the config.json file is present on your host and your credentials (in this case simply stored as base64("<username>:<password>")) do not change, there is no need to run docker login on every build or to store your credentials as variables for your CI job.
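
To illustrate, the auth value can be generated and the file written by hand like this (a sketch; the username, password, and registry URL are placeholders):

```shell
# Encode "<username>:<password>" as base64 -- this is exactly what
# `docker login` stores in the "auth" field of config.json.
auth=$(printf '%s' 'user:pass' | base64)
echo "$auth"   # dXNlcjpwYXNz

# Write an equivalent config.json by hand (placeholder registry URL):
mkdir -p "$HOME/.docker"
cat > "$HOME/.docker/config.json" <<EOF
{
    "auths": {
        "registry.gitlab.com": {
            "auth": "$auth"
        }
    }
}
EOF
```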

My suggestion would be to simply ensure that the config.json file is present on your target machine (either by running docker login once manually or by deploying the file using whatever configuration management tool you like). This saves you from handling the login and managing credentials within your build pipeline.

Regarding the SSH login per se: this should work just fine. If you really want to eliminate the SSH login, you could set up the Docker engine on your target machine to listen on an external socket, configure authentication and encryption with TLS client certificates as described in the official documentation, and talk to the remote server's Docker API directly from within the build job:

variables:
  DOCKER_HOST: "tcp://<target-server>:2376"
  DOCKER_TLS_VERIFY: "1"
script:
  - docker run registry.gitlab.com/my-group/my-project:tag

We had the same "problem" with other hosting providers. Our solution is a custom script that runs on the target machine and can be called via a REST API endpoint (secured by Basic Auth or whatever you prefer).

So you can simply trigger the remote host to do the docker login and upgrade your service, without granting SSH access to gitlab-ci.
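
A minimal sketch of that idea, assuming an HTTP wrapper of your choosing (a webhook server, a tiny CGI script, etc.) invokes this script on the target machine and feeds it the registry token on stdin; all names and paths here are hypothetical:

```shell
#!/bin/sh
# deploy.sh -- runs on the target machine, triggered via an HTTP endpoint.
# Reads the registry token from stdin, pulls the new image, and restarts
# the container. Image and container names are placeholders.
set -e
docker login registry.gitlab.com -u gitlab-ci-token --password-stdin
docker pull registry.gitlab.com/my-group/my-project:tag
docker rm -f my-project 2>/dev/null || true
docker run -d --name my-project registry.gitlab.com/my-group/my-project:tag
```

The CI job then only needs a single call along the lines of curl -u deploy:$DEPLOY_SECRET --data-binary "$CI_JOB_TOKEN" https://my-vm.example.com/hooks/deploy (the endpoint URL and Basic Auth credentials are assumptions).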
