
docker build inside official jenkins container

I run the official Jenkins container on Docker. I need to build a Docker image as a post-build action of a successful build, but the Jenkins container doesn't have the docker binary.

I see a couple of options. The first is to derive my own Jenkins image from the official one, with the docker binary available (rough sketch below). The second is to use a dedicated Jenkins slave with docker and the other necessary runtimes available. The third would be to provision the Jenkins server with Ansible. I would like to run everything in containers, as that is clean, simple, and easily repeatable.
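Here is the kind of thing I mean by the first option. The base image tag and the docker.io package name are assumptions on my part (any Debian-based official Jenkins image should work similarly); the resulting container still needs access to a daemon, for example the host's socket:

# Hypothetical sketch: derive an image from the official Jenkins image
# with the docker CLI installed (docker.io assumed to be available in
# the Debian-based image's repositories).
cat > Dockerfile <<'EOF'
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins
EOF
docker build -t my-jenkins-with-docker .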

How have you solved this problem? Which is the better solution in the long run, and why? My highest priority is to be able to provision, configure, and bootstrap the whole CI infrastructure with a single Ansible command. Also, the built Docker image will be pushed to a registry and so on, so the connectivity between components should be optimal, with minimum complexity and manual configuration.

Installing Docker inside a container is not a good idea.

related article

However, you can reach the Docker daemon that runs on your host by mounting the Docker socket into the container. This should be fine for testing purposes, but don't run like this in production, since it creates a security issue: anything that can write to that socket effectively has root access to the host.
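For example, something like the following mounts the host's socket into the Jenkins container (image tag, port mappings and volume name are just the usual defaults, adjust to your setup); note that the container still needs a docker client binary inside to use it:

# Mount the host's Docker socket so a docker CLI inside the container
# talks to the host daemon (image tag and volume name are assumptions).
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts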

related article

You can certainly find a cleaner solution by managing your deployment/build process directly from your host and watching the exit status of your Jenkins container.

We have a similar setup now with GoCD, where the GoCD agents run in Docker containers and have to build images on a successful pipeline.

tl;dr: Still hacky, but this is the best-behaving option: use a TCP connection from the container back to the Docker host that runs Jenkins. Mind the security implications though, as @Raphayol mentioned.

Here is what we tried:

1) Run docker inside docker

Not a good idea. It results in various hangs where the I/O subsystem just gives up and a reboot is necessary.
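For reference, a minimal sketch of what docker-in-docker typically looks like (using the official docker:dind image, which needs --privileged; not necessarily the exact setup we used):

# Docker-in-Docker: a full daemon running inside a privileged container.
# This is the variant that led to the hangs described above.
docker run -d --name dind --privileged docker:dind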

2) Build on swarm

A Swarm cluster, or any other Docker cluster, is meant to run containers, not to build them. Old images get pushed back as latest because the build and the push are not guaranteed to be executed on the same node.

3) Dedicated build host

Although this works, it defeats the purpose of worker nodes, and auto-scaling becomes tricky.

4) Mount docker socket

Sort of works, but under heavy load it produces random I/O locks and restarting the Docker daemon becomes necessary.

5) Connect back via TCP

This option has been working for months now, and although it's not a clean solution, if your Jenkins build servers are well isolated you can live with it.

[root@ip-10-10-10-10 ~]# docker ps
CONTAINER ID        IMAGE                              COMMAND             CREATED             STATUS              PORTS                              NAMES
e3630d84909e        registry.backbase.com/gocd-agent   "/sbin/my_init"     2 minutes ago       Up 2 minutes        0.0.0.0:9040-9045->9040-9045/tcp   docker_agent_1
[root@ip-10-10-10-10 ~]# docker exec -it e3 env|grep DOCKER
DOCKER_TLS_VERIFY=yes
DOCKER_HOST=tcp://10.10.10.10:2376
DOCKER_CERT_PATH=/var/go/docker-certs
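For completeness, a rough sketch of how the two sides fit together (flags and cert paths are illustrative placeholders, not our exact config): the host daemon has to listen on TLS-protected TCP in addition to the local socket, and the agent container just gets the three DOCKER_* variables plus the client certs mounted in:

# On the Docker host: have dockerd listen on TLS-protected TCP as well as
# the local socket (cert paths are placeholders).
dockerd -H unix:///var/run/docker.sock \
        -H tcp://0.0.0.0:2376 \
        --tlsverify \
        --tlscacert=/etc/docker/certs/ca.pem \
        --tlscert=/etc/docker/certs/server-cert.pem \
        --tlskey=/etc/docker/certs/server-key.pem

# In the agent container: point the docker CLI back at the host
# (client cert path on the host is a placeholder).
docker run -d \
  -e DOCKER_HOST=tcp://10.10.10.10:2376 \
  -e DOCKER_TLS_VERIFY=yes \
  -e DOCKER_CERT_PATH=/var/go/docker-certs \
  -v /path/to/client-certs:/var/go/docker-certs:ro \
  registry.backbase.com/gocd-agent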
