
How to setup multiline java stack trace in Logstash and grok filter?

I am trying to set up multiline handling for my grok filter (I am using Filebeat) in order to parse a Java stack trace.

Currently, I am able to parse the following log:

08/12/2016 14:17:32,746 [ERROR] [nlp.rvp.TTEndpoint] (Thread-38 ActiveMQ-client-global-threads-1048949322) [d762103f-eee0-4dbb-965f-9f8fb500cf92] ERROR: Not found: v1/t/auth/login: Not found: v1/t/auth/login
        at nlp.exceptions.nlpException.NOT_FOUND(nlpException.java:147)
        at nlp.utils.Dispatcher.forwardVersion1(Dispatcher.java:342)
        at nlp.utils.Dispatcher.Forward(Dispatcher.java:189)
        at nlp.utils.Dispatcher$Proxy$_$$_WeldSubclass.Forward$$super(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor171.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:49)

But the result does not show the Java stack trace (the part starting with the "at ..." lines).

This is the output of the Grok Debugger (as you can see, the Java stack trace is missing):

{
  "date": "08/12/2016",
  "loglevel": "ERROR",
  "logger": "nlp.rvp.TTEndpoint",
  "time": "14:17:32,746",
  "thread": "Thread-38 ActiveMQ-client-global-threads-1048949322",
  "message": "ERROR: Not found: v1/t/auth/login: Not found: v1/t/auth/login\r",
  "uuid": "d762103f-eee0-4dbb-965f-9f8fb500cf92"
}

This is the configuration of Filebeat (the log shipper):

filebeat:
  prospectors:
    -
      paths:
        - /var/log/test
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["192.168.1.122:5044"]
    bulk_max_size: 8192
    compression_level: 3

    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

This is the Logstash configuration:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{DATE:date} %{TIME:time} \[%{LOGLEVEL:loglevel}%{SPACE}\] \[(?<logger>[^\]]+)\] \((?<thread>[^)]+)\) \[%{UUID:uuid}\] %{GREEDYDATA:message}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

I hope you can help me so I can finally sort this out (: Thank you!

Avoid doing the multiline parsing at the Logstash level. Use Filebeat's multiline options with an appropriate regexp instead:

multiline.pattern: '^(([0-9]{2}/){2}20[0-9]{2} [0-9]{2}(:[0-9]{2}){2})' 
multiline.negate: true 
multiline.match: after

See https://www.elastic.co/guide/zh-CN/beats/filebeat/master/multiline-examples.html
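
For reference, here is a minimal sketch of how these multiline options could sit inside the prospector section from the question (paths, input_type and document_type are copied from the original Filebeat config above; this exact layout is an assumption, not part of the answer as posted):

filebeat:
  prospectors:
    -
      paths:
        - /var/log/test
      input_type: log
      document_type: syslog
      multiline:
        # Lines that do NOT start with a dd/MM/yyyy HH:mm:ss timestamp are
        # appended to the previous event (negate: true + match: after).
        pattern: '^(([0-9]{2}/){2}20[0-9]{2} [0-9]{2}(:[0-9]{2}){2})'
        negate: true
        match: after
  registry_file: /var/lib/filebeat/registry

With negate: true and match: after, any line that does not match the timestamp pattern (such as the "at ..." stack-trace lines) is attached to the preceding log event before it is shipped to Logstash.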

Thanks everyone, I found the solution!

My new configuration is:

filebeat.yml

filebeat:
  prospectors:
    - type: log
      paths:
        - /var/log/*.log
      multiline:
        pattern: '^[[:space:]]'
        match: after
output:
  logstash:
    hosts: ["xxx.xx.xx.xx:5044"]
    bulk_max_size: 8192
    compression_level: 3

    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
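
One follow-up note (an assumption on my side, not part of the original answer): once Filebeat joins the stack-trace lines into a single event, the grok pattern from the question still captures only up to the first newline, because . does not match newlines by default in grok's regexp engine. A minimal sketch of the adjusted filter, reusing the same pattern and field names as above, with an inline (?m) so GREEDYDATA can span the whole joined stack trace:

filter {
  if [type] == "syslog" {
    grok {
      # (?m) lets . (and therefore GREEDYDATA) match across the newlines
      # of the joined stack trace
      match => { "message" => "(?m)%{DATE:date} %{TIME:time} \[%{LOGLEVEL:loglevel}%{SPACE}\] \[(?<logger>[^\]]+)\] \((?<thread>[^)]+)\) \[%{UUID:uuid}\] %{GREEDYDATA:message}" }
    }
  }
}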

