How to set up multiline Java stack trace parsing in Logstash with a grok filter?
I'm trying to set up multiline handling for my grok filter (I'm using Filebeat) in order to parse Java stack traces.
Currently I'm able to parse the following log:
08/12/2016 14:17:32,746 [ERROR] [nlp.rvp.TTEndpoint] (Thread-38 ActiveMQ-client-global-threads-1048949322) [d762103f-eee0-4dbb-965f-9f8fb500cf92] ERROR: Not found: v1/t/auth/login: Not found: v1/t/auth/login
    at nlp.exceptions.nlpException.NOT_FOUND(nlpException.java:147)
    at nlp.utils.Dispatcher.forwardVersion1(Dispatcher.java:342)
    at nlp.utils.Dispatcher.Forward(Dispatcher.java:189)
    at nlp.utils.Dispatcher$Proxy$_$$_WeldSubclass.Forward$$super(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor171.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:49)
but the result doesn't include the Java stack trace (the lines beginning with at ...).
This is the Grok Debugger output (as you can see, the Java stack trace is missing):
{
  "date": "08/12/2016",
  "loglevel": "ERROR",
  "logger": "nlp.rvp.TTEndpoint",
  "time": "14:17:32,746",
  "thread": "Thread-38 ActiveMQ-client-global-threads-1048949322",
  "message": "ERROR: Not found: v1/t/auth/login: Not found: v1/t/auth/login\r",
  "uuid": "d762103f-eee0-4dbb-965f-9f8fb500cf92"
}
This is the configuration of Filebeat (the log shipper):
filebeat:
  prospectors:
    -
      paths:
        - /var/log/test
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["192.168.1.122:5044"]
    bulk_max_size: 8192
    compression_level: 3
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
This is the configuration of Logstash:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{DATE:date} %{TIME:time} \[%{LOGLEVEL:loglevel}%{SPACE}\] \[(?<logger>[^\]]+)\] \((?<thread>[^)]+)\) \[%{UUID:uuid}\] %{GREEDYDATA:message}" }
    }
  }
}
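(Aside: even after the events arrive as single multiline messages, a grok pattern like the one above can still truncate at the first newline, because in Oniguruma, the regex engine grok uses, `.` does not match `\n` by default. A commonly used fix, shown here as a sketch of the same filter, is to prefix the pattern with the `(?m)` flag so `GREEDYDATA` also captures the stack-trace lines:

```
filter {
  if [type] == "syslog" {
    grok {
      # (?m): in Oniguruma this makes "." match newlines, so
      # GREEDYDATA captures the stack-trace lines as well.
      match => { "message" => "(?m)%{DATE:date} %{TIME:time} \[%{LOGLEVEL:loglevel}%{SPACE}\] \[(?<logger>[^\]]+)\] \((?<thread>[^)]+)\) \[%{UUID:uuid}\] %{GREEDYDATA:message}" }
    }
  }
}
```
)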
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Hope you can help me so I can finally figure this out (: Thanks!
Avoid multiline parsing at the Logstash level. Use Filebeat's features instead, with the multiline option and an appropriate regexp, i.e.
multiline.pattern: '^(([0-9]{2}/){2}20[0-9]{2} [0-9]{2}(:[0-9]{2}){2})'
multiline.negate: true
multiline.match: after
See https://www.elastic.co/guide/en/beats/filebeat/master/multiline-examples.html
Thank you all, I found a solution!
My new configuration is:
filebeat.yml
filebeat:
  prospectors:
    - type: log
      paths:
        - /var/log/*.log
      multiline:
        pattern: '^[[:space:]]'
        match: after
output:
  logstash:
    hosts: ["xxx.xx.xx.xx:5044"]
    bulk_max_size: 8192
    compression_level: 3
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB