
How to run Dockerfile agent on a Jenkins Slave Node?

Please help me with the following. I have Jenkins installed on one shared server that is currently used by multiple teams, so no additional packages or software may be installed there. Each team has its own dedicated slave nodes, which they configure as needed. Likewise, we have one dedicated node to run our jobs (this is where I want to run my job too).

I have a Bitbucket repo with a Jenkinsfile that contains the deployment steps and a Dockerfile from which we would like to create a container and run the deployment steps inside it.

I'm trying to test this first with an example, so I have a Dockerfile like this:

# This image is based on ubuntu:18.04 from Docker Hub
FROM ubuntu:18.04

# Update system packages and install the required tools
RUN apt-get update && \
    apt-get install -y openssh-server wget git curl zip unzip && \
    apt-get clean

# Install rsync
RUN apt-get install -y rsync

And I have a Jenkinsfile like below:

pipeline {
    agent { node { label 'slave_node' } }
    stages {
        stage('Test') {
            agent {
                dockerfile true
            }
            steps {
                sh 'cat /etc/os-release'
                sh 'curl --version'
                sh 'echo Successfully compiled'
            }
        }
    }
} 

When I execute this pipeline job:

  • It fetches the Jenkinsfile.
  • It starts correctly on the slave node mentioned.
  • It checks out the repo code.
  • It proceeds to the stages.
  • But when it reaches stage('Test'), the node switches back to the Jenkins master and it starts checking out the repo code again. [This is where I have the issue; I don't know why it switches.]
  • It proceeds through the stage.
  • It runs the Dockerfile.
  • It tries to pull the image from Docker Hub, but the job fails with "docker: command not found" (an expected error, since the master node doesn't have Docker installed).

The question is: why does the job get switched from slave_node to the master node? Please help; thanks in advance.

If I run this on my personal laptop, it works perfectly fine.

I would check your Docker settings in Jenkins. It may be that you've defined the master node as the default Docker label, so when you run with only `dockerfile true` Jenkins attempts to run the build on the master node.

You can find the reference to this specific option by searching for 'Specifying a Docker Label' in this documentation.

https://www.jenkins.io/doc/book/pipeline/docker/
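A related stage-level option from the same documentation may also be worth trying (this is a sketch adapted from the original pipeline, not the poster's code): the `dockerfile` agent accepts `reuseNode`, which runs the container on the node already allocated at the top level of the pipeline instead of scheduling a fresh agent.

```groovy
pipeline {
    agent { node { label 'slave_node' } }
    stages {
        stage('Test') {
            agent {
                dockerfile {
                    // Build the image and run the container on the node
                    // already allocated at the top level ('slave_node'),
                    // in the same workspace, instead of asking the
                    // scheduler for a new agent.
                    reuseNode true
                }
            }
            steps {
                sh 'cat /etc/os-release'
            }
        }
    }
}
```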

Referring to https://www.jenkins.io/doc/book/pipeline/docker/ I was able to achieve what I needed using a scripted pipeline, so I'm posting that as an answer here.

But I still don't know what's wrong with the declarative pipeline.

node('slave_node') {
    checkout scm
    // Build an image from the Dockerfile in the checked-out workspace
    def customImage = docker.build("custom-image-name")
    // Run the following steps inside a container of that image
    customImage.inside {
        sh 'echo Inside Container'
        sh 'cat /etc/os-release'
    }
}
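For reference, `docker.build` from the Docker Pipeline plugin also accepts a second argument of extra options appended to the `docker build` command line; the tag and flags below are placeholders, shown only as a sketch:

```groovy
node('slave_node') {
    checkout scm
    // The second argument is appended to `docker build`; here it picks an
    // explicit Dockerfile and build context (both placeholder values).
    def customImage = docker.build("custom-image-name:${env.BUILD_ID}", "-f Dockerfile .")
    customImage.inside {
        sh 'cat /etc/os-release'
    }
}
```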

Thanks

Found another way to achieve this: we can use the `dockerfile` agent itself combined with any node label.

An example is below:

#!/usr/bin/env groovy
pipeline {
  agent {
    dockerfile {
      filename 'Dockerfile'
      label 'slave_node'
    }
  }
  stages {
    stage("example stage") {
      steps {
        script {
          sh 'cat /etc/os-release'
          sh 'curl --version'
          sh 'echo Successfully compiled'
        }
      }
    }
  }
}

The source can be found in the Pipeline syntax documentation (https://www.jenkins.io/doc/book/pipeline/syntax/); search for the keyword "dockerfile" to find the details.
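That same syntax reference lists a few further options for the `dockerfile` agent; the sketch below shows `dir`, `additionalBuildArgs`, and `args` with placeholder values (none of these come from the original question):

```groovy
pipeline {
  agent {
    dockerfile {
      filename 'Dockerfile'
      dir 'build'                                    // directory containing the Dockerfile (placeholder)
      label 'slave_node'
      additionalBuildArgs '--build-arg version=1.0'  // extra `docker build` flags (placeholder)
      args '-v /tmp:/tmp'                            // extra `docker run` flags (placeholder)
    }
  }
  stages {
    stage("example stage") {
      steps {
        sh 'cat /etc/os-release'
      }
    }
  }
}
```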

The following code has also worked. It can be used if you don't want to build the Docker image at build time: there is no need for a Dockerfile, and we can directly specify an image name.

pipeline {
    agent {
        docker { 
            image 'image-name'                 
            label 'common-build-machine' 
        }
    }
...
}
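The `docker` agent accepts a couple of further documented options as well; this sketch fills in the elided stages with a trivial step and uses placeholder values:

```groovy
pipeline {
    agent {
        docker {
            image 'image-name'
            label 'common-build-machine'
            alwaysPull true        // re-pull the image on every run
            args '-v /tmp:/tmp'    // extra `docker run` arguments (placeholder)
        }
    }
    stages {
        stage('Check') {
            steps {
                sh 'cat /etc/os-release'
            }
        }
    }
}
```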
