
Logstash Grok Filter for MongoDB Slow Query

I am trying the grok filter pattern below; it works in the Grok Debugger but fails when deployed to Logstash.

Pattern:

'%{GREEDYDATA}:"%{TIMESTAMP_ISO8601:timestamp}%{GREEDYDATA}"s":"%{WORD:severity}",%{SPACE}"c":"%{WORD:component}",%{SPACE}"id":%{NUMBER:id},%{SPACE}"ctx":%{QUOTEDSTRING:context},"msg":%{QUOTEDSTRING:msg},"attr":{"remote":"%{IPV4:client_ip}:%{NUMBER:port}","connectionId":%{NUMBER:connection_id},"connectionCount":%{NUMBER:connection_count}%{GREEDYDATA}',

Input:

{"t":{"$date":"2020-11-09T09:51:41.936+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn2468512","msg":"Connection ended","attr":{"remote":"172.21.41.24:58546","connectionId":2468512,"connectionCount":1617}}

Logstash error:

{"level":"ERROR","loggerName":"logstash.agent","timeMillis":1604933044844,"thread":"Converge PipelineAction::Create<main>","logEvent":{"message":"Failed to execute action","action":{"metaClass":{"metaClass":{"metaClass":{"action":"PipelineAction::Create<main>","exception":"LogStash::ConfigurationError","message":"Expected one of [ \\t\\r\\n], \"#\", [A-Za-z0-9_-], '\"', \"'\", [A-Za-z_], \"-\", [0-9], \"[\", \"{\" at line 26, column 9 (byte 13997) after filter {\n  if [container][image] =~ \"mongodb\" {\n    grok {\n      patterns_dir => [\"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns\"]\n      match => { \"message\" => [\n        '%{GREEDYDATA}:\"%{TIMESTAMP_ISO8601:timestamp}%{GREEDYDATA}\"s\":\"%{WORD:severity}\",%{SPACE}\"c\":\"%{WORD:component}\",%{SPACE}\"id\":%{NUMBER:id},%{SPACE}\"ctx\":%{QUOTEDSTRING:context},\"msg\":%{QUOTEDSTRING:msg},\"attr\":{\"remote\":\"%{IPV4:client_ip}:%{NUMBER:port}\",\"connectionId\":%{NUMBER:connection_id},\"connectionCount\":%{NUMBER:connection_count}%{GREEDYDATA}',\n        ","backtrace":["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'","org/logstash/execution/AbstractPipelineExt.java:183:in `initialize'","org/logstash/execution/JavaBasePipelineExt.java:69:in `initialize'","/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'","/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'","/usr/share/logstash/logstash-core/lib/logstash/agent.rb:357:in `block in converge_state'"]}}}}}}

Here is the conf file I am using:

filter {
  if [container][image] =~ "mongodb" {
    grok {
      patterns_dir => ["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns"]
      match => { "message" => [
        '%{GREEDYDATA}:"%{TIMESTAMP_ISO8601:timestamp}%{GREEDYDATA}"s":"%{WORD:severity}",%{SPACE}"c":"%{WORD:component}",%{SPACE}"id":%{NUMBER:id},%{SPACE}"ctx":%{QUOTEDSTRING:context},"msg":%{QUOTEDSTRING:msg},"attr":{"remote":"%{IPV4:client_ip}:%{NUMBER:port}","connectionId":%{NUMBER:connection_id},"connectionCount":%{NUMBER:connection_count}%{GREEDYDATA}',
        ]
        break_on_match => false
        tag_on_failure => ["failed_match"]
      }
    }
  }
}

If anyone knows how to solve this, please let me know. TIA

Your grok filter is missing a closing brace in the match option.

grok {
  patterns_dir => ["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns"]
  match => { "message" => ['%{GREEDYDATA}:"%{TIMESTAMP_ISO8601:timestamp}%{GREEDYDATA}"s":"%{WORD:severity}",%{SPACE}"c":"%{WORD:component}",%{SPACE}"id":%{NUMBER:id},%{SPACE}"ctx":%{QUOTEDSTRING:context},"msg":%{QUOTEDSTRING:msg},"attr":{"remote":"%{IPV4:client_ip}:%{NUMBER:port}","connectionId":%{NUMBER:connection_id},"connectionCount":%{NUMBER:connection_count}%{GREEDYDATA}']}
  break_on_match => false
  tag_on_failure => ["failed_match"]
}
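Laid out over multiple lines in the same style as the conf file in the question (this is just the snippet above reformatted, not a different fix), the ] that closes the pattern array and the } that closes the match hash must come before break_on_match and tag_on_failure:

grok {
  patterns_dir => ["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns"]
  match => { "message" => [
    '%{GREEDYDATA}:"%{TIMESTAMP_ISO8601:timestamp}%{GREEDYDATA}"s":"%{WORD:severity}",%{SPACE}"c":"%{WORD:component}",%{SPACE}"id":%{NUMBER:id},%{SPACE}"ctx":%{QUOTEDSTRING:context},"msg":%{QUOTEDSTRING:msg},"attr":{"remote":"%{IPV4:client_ip}:%{NUMBER:port}","connectionId":%{NUMBER:connection_id},"connectionCount":%{NUMBER:connection_count}%{GREEDYDATA}'
    ]
  }
  break_on_match => false
  tag_on_failure => ["failed_match"]
}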

But your message is already a JSON object, so you do not need grok at all; you can use the json filter to parse the message and the mutate filter to rename the parsed fields.

Try something like this:

json {
    source => "message"
}
mutate {
    rename => ["[t][$date]","timestamp"]
    rename => ["s","severity"]
    rename => ["c","component"]
    ... the rest of your fields ...
}
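For reference, here is a minimal sketch of what the full filter could look like with this approach, keeping the [container][image] conditional from the question and targeting the same field names the original grok pattern captured. The dissect step that splits [attr][remote] into client_ip and port, and the exact rename targets, are assumptions based on that pattern, so adjust them to your actual mapping:

filter {
  if [container][image] =~ "mongodb" {
    # The MongoDB log line is already JSON, so parse it directly
    json {
      source => "message"
    }
    # Split "172.21.41.24:58546" into the ip/port fields the grok pattern used to capture
    dissect {
      mapping => { "[attr][remote]" => "%{client_ip}:%{port}" }
    }
    # Rename the short MongoDB field names to the names used in the grok pattern;
    # "id" and "msg" already match, so they are left as-is
    mutate {
      rename => {
        "[t][$date]"              => "timestamp"
        "s"                       => "severity"
        "c"                       => "component"
        "ctx"                     => "context"
        "[attr][connectionId]"    => "connection_id"
        "[attr][connectionCount]" => "connection_count"
      }
    }
  }
}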
