
Logstash: Parse Complicated Multiline JSON from log file into ElasticSearch

Let me first say that I have gone through as many examples on here as I could, and none of them work. I am not sure if it is because of the complicated nature of the JSON in the log file or not.

I am looking to take the example log entry, have Logstash read it in, and send the JSON as JSON to ElasticSearch.

Here is what the (shortened) example looks like:

[0m[0m16:02:08,685 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 28) JBAS018559: {
"appName": "SomeApp",
"freeMemReqStartBytes": 544577648,
"freeMemReqEndBytes": 513355408,
"totalMem": 839385088,
"maxMem": 1864368128,
"anonymousUser": false,
"sessionId": "zz90g0dFQkACVao4ZZL34uAb",
"swAction": {
    "clock": 0,
    "clockStart": 1437766438950,
    "name": "General",
    "trackingMemory": false,
    "trackingMemoryGcFirst": true,
    "memLast": 0,
    "memOrig": 0
},
"remoteHost": "127.0.0.1",
"remoteAddr": "127.0.0.1",
"requestMethod": "GET",
"mapLocalObjectCount": {
    "FinanceEmployee": {
      "x": 1,
      "singleton": false
    },
    "QuoteProcessPolicyRef": {
      "x": 10,
      "singleton": false
    },
    "LocationRef": {
      "x": 2,
      "singleton": false
    }
},
"theSqlStats": {
    "lstStat": [
      {
        "sql": "select * FROM DUAL",
        "truncated": false,
        "truncatedSize": -1,
        "recordCount": 1,
        "foundInCache": false,
        "putInCache": false,
        "isUpdate": false,
        "sqlFrom": "DUAL",
        "usingPreparedStatement": true,
        "isLoad": false,
        "sw": {
          "clock": 104,
          "clockStart": 1437766438970,
          "name": "General",
          "trackingMemory": false,
          "trackingMemoryGcFirst": true,
          "memLast": 0,
          "memOrig": 0
        },
        "count": 0
      },
      {
        "sql": "select * FROM DUAL2",
        "truncated": false,
        "truncatedSize": -1,
        "recordCount": 0,
        "foundInCache": false,
        "putInCache": false,
        "isUpdate": false,
        "sqlFrom": "DUAL2",
        "usingPreparedStatement": true,
        "isLoad": false,
        "sw": {
          "clock": 93,
          "clockStart": 1437766439111,
          "name": "General",
          "trackingMemory": false,
          "trackingMemoryGcFirst": true,
          "memLast": 0,
          "memOrig": 0
        },
        "count": 0
      }
    ]
    }
}

The Logstash configs I have tried have not worked. The closest one so far is:

input {
    file {
        codec => multiline {
            pattern => '\{(.*)\}'
            negate => true
            what => previous
        }
        path => [ '/var/log/logstash.log' ]
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}

filter {
    json {
        source => message
    }
}

output {
    stdout { codec => rubydebug }
    elasticsearch {
        cluster => "logstash"
        index => "logstashjson"
    }
}
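The pattern `\{(.*)\}` is the reason this first attempt folds everything together: with `negate => true` and `what => previous`, any line that does not match the pattern is appended to the previous event, and no single line of the pretty-printed JSON ever contains both an opening and a closing brace. A minimal Python sketch of that decision logic (an approximation of the codec's behavior, not the actual Logstash implementation) makes the failure visible:

```python
import re

def multiline_join(lines, pattern):
    """Rough simulation of the multiline codec with negate => true and
    what => previous: a line that matches the pattern starts a new event;
    every other line is appended to the previous event."""
    events = []
    for line in lines:
        if re.search(pattern, line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

# Two shortened log entries in the shape of the example above.
lines = [
    '[0m[0m16:02:08,685 INFO  [org.jboss.as.server] JBAS018559: {',
    '"appName": "SomeApp",',
    '"anonymousUser": false',
    '}',
    '[0m[0m16:03:11,042 INFO  [org.jboss.as.server] JBAS018559: {',
    '"appName": "OtherApp"',
    '}',
]

# The original pattern never matches a single line (no line holds both
# braces), so everything collapses into one giant event:
print(len(multiline_join(lines, r'\{(.*)\}')))   # 1 event

# Anchoring on the timestamp line keeps the two entries apart:
print(len(multiline_join(lines, r'^\[')))        # 2 events
```

This is why the working pattern shown later anchors on `^\[`, the start of the timestamp line, rather than trying to match the JSON braces.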

I have also tried:

input {
    file {
        type => "json"
        path => "/var/log/logstash.log"
        codec => json #also tried json_lines
    }
}

filter {
    json {
        source => "message"
    }
}

output {
    stdout { codec => rubydebug }
    elasticsearch {
        cluster => "logstash"
        codec => "json" #also tried json_lines
        index => "logstashjson"
    }
}

I just want to take the JSON posted above and send it "as is" to ElasticSearch, just as if I did a cURL PUT with that file. I appreciate any help, thank you!

UPDATE

After help from Leonid, here is the configuration I have right now:

input {
    file {
        codec => multiline {
            pattern => "^\["
            negate => true
            what => previous
        }
        path => [ '/var/log/logstash.log' ]
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}

filter {
    grok {
        match => { "message" => "^(?<rubbish>.*?)(?<logged_json>{.*)" }
    }
    json {
        source => "logged_json"
        target => "parsed_json"
    }
}

output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        cluster => "logstash"
        index => "logstashjson"
    }
}
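To see what the grok and json filters in this config are doing, the same extraction can be sketched in Python (the sample message is hypothetical, and `re.DOTALL` stands in for letting the capture run across the embedded newlines):

```python
import re
import json

# One event as produced by the multiline codec: log prefix + pretty JSON.
message = ('[0m[0m16:02:08,685 INFO  [org.jboss.as.server] '
           '(ServerService Thread Pool -- 28) JBAS018559: {\n'
           '  "appName": "SomeApp",\n'
           '  "anonymousUser": false\n'
           '}')

# Python analogue of the grok pattern: lazily skip the log prefix, then
# capture from the first '{' to the end of the event.
m = re.match(r'^(?P<rubbish>.*?)(?P<logged_json>\{.*)', message, re.DOTALL)
parsed = json.loads(m.group('logged_json'))
print(parsed['appName'])   # SomeApp
```

The `rubbish` group swallows the ANSI escape residue, timestamp, and logger name, and only the brace-delimited tail is handed to the JSON parser, which mirrors what `source => "logged_json"` does.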

Sorry, I can't make comments yet, so I will post an answer. You are missing a document_type in the elasticsearch config; how would it otherwise be deduced?


All right, after looking into the Logstash reference and working closely with @Ascalonian, we came up with the following config:

input { 
    file { 

        # In the input you need to properly configure the multiline codec.
        # You need to match the line that has the timestamp at the start,
        # and then say 'everything that is NOT this line should go to the previous line'.
        # The pattern could be improved to handle the case where the json array
        # starts at the first char of the line, but it is sufficient for now.

        codec => multiline { 
            pattern => "^\[" 
            negate => true 
            what => previous 
            max_lines => 2000 
        } 

        path => [ '/var/log/logstash.log'] 
        start_position => "beginning" 
        sincedb_path => "/dev/null" 
    } 
} 

filter { 

    # extract the json part of the message string into a separate field
    grok { 
        match => { "message" => "^.*?(?<logged_json>{.*)" } 
    } 

    # Replace newlines in the json string, since the json filter below
    # cannot deal with them. Also a good time to delete unwanted fields.
    mutate { 
        gsub => [ 'logged_json', '\n', '' ] 
        remove_field => [ "message", "@timestamp", "host", "path", "@version", "tags"] 
    } 

    # parse the json and remove the string field upon success
    json { 
        source => "logged_json" 
        remove_field => [ "logged_json" ] 
    } 
} 

output { 
    stdout { 
        codec => rubydebug 
    } 
    elasticsearch { 
        cluster => "logstash" 
        index => "logstashjson" 
    } 
}
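Putting the filter chain together, an end-to-end sketch of what this config does to one multiline event (a Python approximation under the same assumptions as the comments above, with a hypothetical shortened event) looks like:

```python
import re
import json

def parse_event(event):
    """Mirror of the filter chain: grok-style extraction of the JSON tail,
    gsub-style newline removal, then JSON parsing."""
    m = re.match(r'^.*?(?P<logged_json>\{.*)', event, re.DOTALL)
    if not m:
        return None
    # mutate { gsub => [ 'logged_json', '\n', '' ] }
    flattened = m.group('logged_json').replace('\n', '')
    # json { source => "logged_json" }
    return json.loads(flattened)

event = ('[0m[0m16:02:08,685 INFO  [org.jboss.as.server] JBAS018559: {\n'
         '  "appName": "SomeApp",\n'
         '  "swAction": {\n'
         '    "clock": 0,\n'
         '    "name": "General"\n'
         '  }\n'
         '}')

doc = parse_event(event)
print(doc['swAction']['name'])   # General
```

The parsed document keeps its nested structure, so ElasticSearch receives the JSON essentially "as is", which was the original goal.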
