
Jenkins API response tuning

We have built a dashboard on top of Jenkins which enables users to see only the jobs relevant to their project and also to trigger builds. The UI is built with ReactJS and the backend is Java REST web services.

The web service calls the Jenkins API to fetch job information and converts the data to JSON for the UI. At present we have around 200 jobs on the dashboard, and it takes around 2 minutes for the Jenkins API to respond with the details.

Jenkins is running on a Linux box:

OracleLinux 6 x Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz / 39.25 GB

Jenkins version - 1.564, with 16 executors and more than 2000 jobs

Sample API call - http://jenkins:8080/job/jobName/api/json?tree=displayName,builds[result],lastBuild[estimatedDuration,result,duration,number,timestamp,actions[causes[userName]]]

The API is called 200 times, once per job, to fetch the details of each of the 200 jobs.
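
For illustration, the per-job fetching amounts to something like the following minimal sketch (assuming Java 11+ and the java.net.http client; class and method names are illustrative, and authentication and the JSON-to-UI mapping are left out):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class SequentialJobFetcher {
    // Per-job tree expression from the sample call above (URL-encoded because of the brackets).
    private static final String TREE = URLEncoder.encode(
        "displayName,builds[result],lastBuild[estimatedDuration,result,duration,"
            + "number,timestamp,actions[causes[userName]]]",
        StandardCharsets.UTF_8);

    // Fetches each job one after another: roughly 200 blocking round trips.
    public static List<String> fetchAll(List<String> jobNames) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        List<String> jsonPerJob = new ArrayList<>();
        for (String job : jobNames) {
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://jenkins:8080/job/" + job + "/api/json?tree=" + TREE))
                .GET()
                .build();
            jsonPerJob.add(client.send(request, HttpResponse.BodyHandlers.ofString()).body());
        }
        return jsonPerJob;
    }
}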

Any advice on how to speed up the API response?

I have considered increasing the RAM on the Linux box and tuning the JVM options, as well as upgrading Jenkins to the latest LTS.

Low-hanging fruit:

  1. Run the requests in parallel, i.e., not one after another (see the sketch after this list).
  2. If you do that, and if you use the standard Jetty container, try increasing the number of worker threads with the --handlerCountMax option (the default is 40).
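
A minimal sketch of point 1, assuming a Java 11+ java.net.http client on the dashboard backend (class and method names are illustrative):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class ParallelJobFetcher {
    // Same per-job tree expression as in the question, URL-encoded because of the brackets.
    private static final String TREE = URLEncoder.encode(
        "displayName,builds[result],lastBuild[estimatedDuration,result,duration,"
            + "number,timestamp,actions[causes[userName]]]",
        StandardCharsets.UTF_8);

    public static List<String> fetchAll(List<String> jobNames) {
        HttpClient client = HttpClient.newHttpClient();
        // Fire all requests at once instead of blocking on each job in turn,
        // then collect the bodies in the order the jobs were given.
        List<CompletableFuture<String>> futures = jobNames.stream()
            .map(job -> HttpRequest.newBuilder(
                    URI.create("http://jenkins:8080/job/" + job + "/api/json?tree=" + TREE))
                .GET()
                .build())
            .map(request -> client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body))
            .collect(Collectors.toList());
        return futures.stream().map(CompletableFuture::join).collect(Collectors.toList());
    }
}

How many of these requests Jenkins actually serves concurrently is still capped on the server side, which is where the --handlerCountMax increase from point 2 comes in; it may also be worth limiting the client-side concurrency so the master is not flooded.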

Eventually, you should try to avoid performing 200 individual requests. Depending on your setup, the security checks alone for every request can cause substantial overhead.

Therefore, the cleanest solution is to gather all the data that you need from a single Groovy script on the master (you can do that via REST as well; see the sketch after the list below):

  • this reduces the number of requests to 1
  • it allows for further optimization, possibly circumventing the problems mentioned in the comment of Jon S above
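
A minimal sketch of that approach, assuming the script console's /scriptText endpoint is what the REST call goes to (credentials, and a CSRF crumb header if crumb protection is enabled, are omitted); the embedded Groovy is only an illustration of the kind of aggregation such a script could do:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class GroovyScriptCollector {
    // Illustrative Groovy run on the master: name, result and duration of each job's last build, as JSON.
    private static final String SCRIPT =
        "import groovy.json.JsonOutput\n"
        + "def jobs = jenkins.model.Jenkins.instance.getAllItems(hudson.model.Job).collect { j ->\n"
        + "  def b = j.lastBuild\n"
        + "  [name: j.displayName, result: b?.result?.toString(), duration: b?.duration]\n"
        + "}\n"
        + "println JsonOutput.toJson(jobs)\n";

    public static String collect() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String form = "script=" + URLEncoder.encode(SCRIPT, StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://jenkins:8080/scriptText"))
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString(form))
            .build();
        // The script's println output comes back as the response body.
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}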

Since it seems like you're not hitting any lazy-loading issues on your server (you only have 60 builds per job), the problem is probably the per-request overhead, as Alex O suggests. Alex O also suggested doing it all in a single request, which can be done with the following call:

http://jenkins:8080/api/json?tree=jobs[displayName,builds[result],lastBuild[estimatedDuration,result,duration,number,timestamp,actions[causes[userName]]]]

Instead of relying on the per-job API, we use the Jenkins root API, where we can fetch the data for all jobs in a single request.
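
A minimal sketch of that single request, again assuming a Java 11+ client; mapping the returned JSON onto the dashboard model is left out:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class AllJobsFetcher {
    // The whole-instance tree expression from the URL above, URL-encoded because of the brackets.
    private static final String TREE = URLEncoder.encode(
        "jobs[displayName,builds[result],lastBuild[estimatedDuration,result,duration,"
            + "number,timestamp,actions[causes[userName]]]]",
        StandardCharsets.UTF_8);

    // One round trip returns every job; the ~200 dashboard jobs are filtered afterwards.
    public static String fetchAllJobs() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://jenkins:8080/api/json?tree=" + TREE))
            .GET()
            .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}

Since this instance has more than 2000 jobs but the dashboard only shows about 200, the response will contain far more jobs than needed; if those 200 jobs are grouped in a Jenkins view, the same tree expression can usually be applied to that view's API (http://jenkins:8080/view/VIEWNAME/api/json?tree=jobs[...]) so that only the relevant jobs are returned.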
