Does a master Jenkins keep track of a dynamic job created on a slave node?
Amidst a Jenkins job build, we can create new jobs dynamically using a Groovy script. More on this .
We have a one-master-and-n-slave-nodes architecture.
We create a Jenkins job (say some-pipeline-job) that, obviously, gets configured on the master Jenkins.
On triggering a build of this job (some-pipeline-job), the build can run on any slave node.
Consequences:
1) Each build of this job (some-pipeline-job) creates a workspace, and that build can run on any slave node
2) This job (some-pipeline-job) contains Groovy code that creates a new dynamic job (say job23) at runtime, amidst its build
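For context, dynamic job creation like this is typically done with the Jenkins Java API. The sketch below is only illustrative (the job name job23 and the config XML are assumptions from the question, and running it requires admin permissions or script approval); it creates a freestyle job on the master from inside a running build:

```groovy
// Hypothetical sketch: create "job23" at runtime from a Groovy step.
// Requires administrative permissions / in-process script approval.
import jenkins.model.Jenkins

// Minimal freestyle job config; contents are illustrative only
def configXml = '''<project>
  <builders>
    <hudson.tasks.Shell><command>echo built by job23</command></hudson.tasks.Shell>
  </builders>
</project>'''

def jenkins = Jenkins.instance
if (jenkins.getItem('job23') == null) {
    // createProjectFromXML registers the new job with the master Jenkins
    jenkins.createProjectFromXML('job23',
        new ByteArrayInputStream(configXml.getBytes('UTF-8')))
}
```

Because createProjectFromXML registers the item on the master, a job created this way shows up in the master's item list like any statically configured job.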
Goal:
Disk management of the workspaces of any job's builds across slave nodes, using the second step mentioned in this procedure , based on criteria such as numberOfDaysOld builds, etc.
1) Can the second step mentioned in cloudbees-support take care of cleaning the workspaces of all builds of a specific job (some-pipeline-job) run across multiple slave Jenkins nodes?
2) Does the master Jenkins have information about this dynamic job (job23) created by some-pipeline-job at runtime? How can I ensure that a dynamic job gets tracked (configured) on the master Jenkins?
3) If yes, can that second step mentioned in cloudbees-support also take care of cleaning the workspace of a job23 build?
There are several strategies to clean workspaces. The easiest would be to use the WipeWorkspace extension for the checkout step:
checkout([
    $class: 'GitSCM',
    branches: scm.branches,
    extensions: scm.extensions + [[$class: 'WipeWorkspace']],
    userRemoteConfigs: scm.userRemoteConfigs
])
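Another common strategy, assuming the Workspace Cleanup (ws-cleanup) plugin is installed, is to call its cleanWs() step in a post section so the workspace is wiped on whichever node ran the build:

```groovy
// Sketch assuming the Workspace Cleanup (ws-cleanup) plugin is installed.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'build steps here'
            }
        }
    }
    post {
        // Runs on the same node as the build, regardless of which slave was used
        always {
            cleanWs()
        }
    }
}
```

This only cleans the workspace of the build that just ran; it does not reach across other nodes, which is why the cross-node approach below may still be needed.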
You seem to need something more elaborate. You can list the Jenkins slaves with:
hudson.model.Hudson.instance.slaves
What I would do is schedule a pipeline job and add the following functions to it:
// Collect the names of all slaves that are currently online
@NonCPS
def onlineSlaves() {
    def slaves = []
    hudson.model.Hudson.instance.slaves.each {
        try {
            def computer = it.computer
            if (!computer.isOffline()) {
                slaves << it.name
            }
        } catch (error) {
            println error
        }
    }
    return slaves
}

// Run a shell command on each online slave in series
def shAllSlaves(unixCmdLine) {
    onlineSlaves().each {
        try {
            node(it) {
                if (isUnix()) {
                    sh "${unixCmdLine}"
                }
            }
        } catch (error) {
            println error
        }
    }
}
and execute a sh command, such as find, to delete old folders:
script {
    def numberOfDaysOld = 10
    shAllSlaves "find /path/to/base/dir/* -type d -ctime +${numberOfDaysOld} -exec rm -rf {} \\;"
}