
Aggregate Jenkins build logs stored in ElasticSearch

I'm storing my Jenkins build logs in ElasticSearch with the Jenkins Logstash plugin.

My configuration looks sort of like this:

[screenshot: Logstash plugin configuration]

That part works great, but I'd like to view the full log in Kibana.

The plugin incrementally sends the results to ES and breaks on each newline. That means a long log can look something like this in Kibana:

[screenshot: the log as it appears in Kibana]

Each line is a massive JSON document containing tons of fields I do not care about. I really only care about the message field.

I'm reading about aggregations right now, which appear to be what I need, but my results are not coming out the way I'd like.

curl -X GET "localhost:9200/_search" -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "buildLog" : {
            "terms" : {
                "field" : "data.url"
            }
        }
    }
}'

This prints out a large glob of JSON that does not have what I need.

In a perfect world, I'd like to concatenate every message field for each data.url and fetch that.
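One way to sketch that is a terms aggregation on the URL with a top_hits sub-aggregation that returns only the message field, sorted by time. Note that the field names data.url.keyword and @timestamp are assumptions about the index mapping and may differ in your setup:

```json
{
    "size": 0,
    "aggs": {
        "buildLog": {
            "terms": { "field": "data.url.keyword" },
            "aggs": {
                "messages": {
                    "top_hits": {
                        "_source": ["message"],
                        "sort": [ { "@timestamp": { "order": "asc" } } ],
                        "size": 100
                    }
                }
            }
        }
    }
}
```

Be aware that top_hits returns at most 100 hits per bucket by default, so for long logs a plain filtered search per build may scale better.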

In SQL, an individual query for this might look something like:

SELECT message FROM jenkins-logstash WHERE data.url='job/playground/36' ORDER BY timestamp ASC

Where 'job/playground/36' is one example of a data.url value.
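The ES equivalent of that SQL can be sketched as a plain search, sent with the same curl form as the aggregation above. Again, data.url.keyword and @timestamp are assumptions about the mapping:

```json
{
    "_source": ["message"],
    "query": { "term": { "data.url.keyword": "job/playground/36" } },
    "sort": [ { "@timestamp": { "order": "asc" } } ],
    "size": 1000
}
```

The _source filter drops all the fields you don't care about, and size would need to be raised (or the search paginated) for logs longer than 1000 lines.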

How can I go about doing this?

Update: a better answer than before.

I still ended up using FileBeat, but as of ELK v6.5+, Kibana has a Logs UI! https://www.elastic.co/guide/en/kibana/current/logs-ui.html

The default config from FileBeat works fine with it.

__ __

Old answer:

I ended up solving this by using FileBeat to harvest all the logs, then watching each one in the Kibana Log Viewer. I filtered based on source and used the path where the log was going to be.
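For reference, a minimal FileBeat input along those lines might look like the sketch below. The Jenkins build-log path is an assumption about a typical installation and will depend on your Jenkins home:

```yaml
filebeat.inputs:
  - type: log
    # Hypothetical Jenkins home layout; adjust to your installation
    paths:
      - /var/lib/jenkins/jobs/*/builds/*/log
output.elasticsearch:
  hosts: ["localhost:9200"]
```

Each harvested file gets a source field containing its path, which is what the filter in the Log Viewer keys on.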

