
Docker and Jenkins

I am working with Docker and Jenkins, and I'm trying to accomplish two main tasks:

  1. Control and manage Docker images and containers (run/start/stop) with Jenkins.
  2. Set up a development environment in a Docker image, then build and test my application, which lives in the container, using Jenkins.

While searching the web I found several solutions:

  • Run Jenkins as a container and link it with other containers.
  • Run Jenkins as a service and use the Jenkins plugins provided to support Docker.
  • Run Jenkins inside the container that contains the development environment.

So my question is: which is the best solution, or can you suggest another approach?

One more question: I have heard about running a container inside a container. Is that good practice, or is it better to avoid it?

Running Jenkins as a containerized service is not a difficult task. There are many images out there that let you do just that. It took me just a couple of minutes to get Jenkins 2.0-beta-1 running in a container, compiled from source (the image can be found here). I particularly like this approach; you just have to make sure to use a data volume or a data container as jenkins_home so that your data persists.

Things become a little trickier when you want to use this Jenkins (itself running in a container) to build and manage containers. To achieve that, you need to implement something called docker-in-docker, because you'll need a Docker daemon and client available inside the Jenkins container.

There is a very good tutorial explaining how to do it: Docker in Docker with Jenkins and Supervisord.

Basically, you need to make the two processes (Jenkins and Docker) run in the same container, using something like supervisord. It's doable and claims to offer good isolation, etc., but it can be really tricky, because the Docker daemon itself has dependencies that need to be present inside the container as well. So running both processes under supervisord alone is not enough: you'll need to make use of the DinD project itself to make it work... AND you'll need to run the container in privileged mode... AND you'll need to deal with some strange DNS problems...
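For reference, the supervisord side of that setup boils down to a config along these lines. This is only a sketch of the idea; the program paths and entrypoint names are illustrative, not taken from the tutorial:

```ini
; supervisord.conf (sketch) - run both the Docker daemon and Jenkins
; in one container. Paths below are placeholders for your own image.
[supervisord]
nodaemon=true

[program:docker]
; The DinD wrapper script that starts dockerd inside the container;
; requires the container to run with --privileged.
command=/usr/local/bin/dind-entrypoint.sh
priority=10

[program:jenkins]
; Start Jenkins only after the daemon program is declared.
command=/usr/local/bin/jenkins.sh
priority=20
```

Even with this in place you still need the privileged flag and the DinD helper script, which is exactly the accumulation of workarounds discussed below.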

For my personal taste, that is too many workarounds to make something simple work, and having two services running inside one container seems to break Docker good practices and the principle of separation of concerns, something I'd like to avoid.

My opinion got even stronger when I read this: Using Docker-in-Docker for your CI or testing environment? Think twice. It's worth mentioning that this last post is from the DinD author himself, so it deserves some attention.

My final solution is: run Jenkins as a containerized service, yes, but treat the Docker daemon as part of the provisioning of the underlying server, especially because your Docker cache and images are data that you'll probably want to persist, and they are fully owned and controlled by the daemon.
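Provisioning the daemon on the host can be as simple as the following sketch (it assumes a Debian/Ubuntu host with systemd and uses Docker's official convenience script; adjust for your distribution):

```shell
# Install the Docker daemon on the HOST, not inside the Jenkins image.
curl -fsSL https://get.docker.com | sh

# Make the daemon start on boot and right now.
sudo systemctl enable --now docker

# This is the socket the Jenkins container will mount:
ls -l /var/run/docker.sock
```

This way the image cache, build layers, and running containers all live with the host daemon and survive Jenkins container restarts.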

With this setup, all you need to do is mount the Docker daemon socket into your Jenkins container (the image also needs the Docker client, but not the daemon):

$ docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock -v "$(pwd)"/local/folder/with/jenkins_home:/var/jenkins_home namespace/my-jenkins-image

Or with a docker-compose volumes directive:

---
version: '2'

services:
  jenkins:
    image: namespace/my-jenkins-image
    ports:
      - '8080:8080'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./local/folder/with/jenkins_home:/var/jenkins_home

# other services ...
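The namespace/my-jenkins-image used above is assumed to be a stock Jenkins image with only the Docker CLI added. A minimal sketch of such a Dockerfile (base image, binary version, and architecture are assumptions you should adapt):

```dockerfile
# Sketch: Jenkins image with the Docker client only; the daemon stays on the host.
FROM jenkins/jenkins:lts

USER root
# Install just the static docker CLI binary (version/arch are illustrative).
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-24.0.7.tgz \
    | tar xz --strip-components=1 -C /usr/local/bin docker/docker

# Run Jenkins as the unprivileged jenkins user again.
USER jenkins
```

Note that the jenkins user inside the container must be able to write to the mounted /var/run/docker.sock; in practice that means matching the socket's group ID (e.g. adding the jenkins user to a group with the host's docker GID) or adjusting the socket's permissions on the host.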
