
Does a master Jenkins keep track of a dynamic job created on a slave node?

During a Jenkins job build, we can create new jobs dynamically using a Groovy script.

We have a one-master-and-n-slave-nodes architecture.


We create a Jenkins job (say some-pipeline-job), which naturally gets configured on the master Jenkins.

On triggering a build of this job (some-pipeline-job), the build can run on any slave node.


Consequences:

1) Each build of this job (some-pipeline-job) creates a workspace on whichever slave node it runs on

2) This job (some-pipeline-job) contains Groovy code that creates a new dynamic job (say job23) at runtime, during its build


Goal:

Disk management of the workspaces of any job's builds across slave nodes, using the second step mentioned in this procedure (a cloudbees-support article), based on criteria such as how many days old a build is, etc.


1)

Can that second step mentioned in the cloudbees-support article take care of cleaning the workspaces for all builds of a specific job (some-pipeline-job) run across multiple slave Jenkins nodes?

2)

Does the master Jenkins have information about this dynamic job (job23) created by some-pipeline-job at runtime? How can I ensure that a dynamic job gets tracked (configured) on the master Jenkins?

3)

If yes, can that second step mentioned in the cloudbees-support article also take care of cleaning the workspace of a job23 build?

There are several strategies for cleaning workspaces. The easiest is to use the WipeWorkspace extension for the checkout step.

checkout([
   $class: 'GitSCM',
   branches: scm.branches,
   extensions: scm.extensions + [[$class: 'WipeWorkspace']],
   userRemoteConfigs: scm.userRemoteConfigs
])
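
A per-build alternative, assuming the Workspace Cleanup plugin is installed (it provides the cleanWs step), is to wipe the workspace in a post block, so each build cleans up after itself on whichever node it ran on. A minimal declarative sketch (the build step is a placeholder):

```groovy
// Sketch: a pipeline that cleans its own workspace after every build.
// Assumes the Workspace Cleanup plugin (which provides cleanWs()) is installed.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make'   // placeholder build step
            }
        }
    }
    post {
        always {
            cleanWs()   // wipe this build's workspace on the node that ran it
        }
    }
}
```

This keeps every node tidy as you go, but it does not help with workspaces left behind by already-completed builds, which is what the approach below addresses.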

You seem to need something more elaborate. You can list Jenkins slaves with hudson.model.Hudson.instance.slaves.

What I would do is schedule a pipeline job and add the following functions to it:

// Return the names of all slaves that are currently online
@NonCPS
def onlineSlaves() {
    def slaves = []
    hudson.model.Hudson.instance.slaves.each {
        try {
            def computer = it.computer
            if (!computer.isOffline()) {
                slaves << it.name
            }
        } catch (error) {
            println error
        }
    }
    return slaves
}

// Run a command on each slave in series
def shAllSlaves(unixCmdLine) {
    onlineSlaves().each {
        try {
            node(it) {
                if (isUnix()) {
                    sh "${unixCmdLine}"
                }
            }
        } catch (error) {
            println error
        }
    }
}

and execute a sh command, such as find, to delete old folders:

script {
    def numberOfDaysOld = 10
    // Note the escaped "\\;" — a bare "\;" is an invalid escape in a Groovy double-quoted string
    shAllSlaves "find /path/to/base/dir/* -type d -ctime +${numberOfDaysOld} -exec rm -rf {} \\;"
}
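
Putting it together, the helpers can be driven from a periodically triggered pipeline. This is a sketch under assumptions: the cron spec and the base path are placeholders, and the onlineSlaves() and shAllSlaves() functions above are assumed to be defined in the same Jenkinsfile.

```groovy
// Sketch: a nightly cleanup pipeline that runs the find command on every
// online slave, via the shAllSlaves() helper defined above.
pipeline {
    agent none
    triggers {
        cron('H 2 * * *')   // run nightly around 02:00 (placeholder schedule)
    }
    stages {
        stage('Clean old workspaces') {
            steps {
                script {
                    def numberOfDaysOld = 10
                    shAllSlaves "find /path/to/base/dir/* -type d -ctime +${numberOfDaysOld} -exec rm -rf {} \\;"
                }
            }
        }
    }
}
```

Because job23 is created at runtime, its workspace directory also ends up under the node's workspace root, so a path-based sweep like this catches it whether or not the master has the job's configuration.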
