fluentd log parsing missing new line \n for java stacktrace log

I have the below fluentd config used to parse a java stacktrace log:

<source>
  @id fluentd-containers.log
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  tag raw.kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key time
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </pattern>
    <pattern>
      format multiline
      format_firstline /\d{4}-\d{1,2}-\d{1,2}/
      format1 /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}) \[(?<thread>.*)\] (?<level>[^\s]+)(?<message>.*)/
    </pattern>
    <pattern>
      format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
      time_format %Y-%m-%dT%H:%M:%S.%N%:z
    </pattern>
  </parse>
</source>

And I was expecting to receive that log as it is shown in the syslog, something like below:

java.lang.NullPointerException: Name is null
    at java.lang.Enum.valueOf(Enum.java:236)
    at sa.com.stcs.tracking.events.ProfileType.valueOf(ProfileType.java:3)
    at sa.com.stcs.geofence.GeofenceProfileRepository.mapToGeofenceProfile(GeofenceProfileRepository.java:110)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.Iterator.forEachRemaining(Iterator.java:116)
    at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)

But unfortunately it comes to Kibana through Elasticsearch with the new line \n missing (see the screenshot of the Kibana output).

Question: where exactly should I introduce the new line special char \n?

You should not use \n. If you look at the documentation here, format_firstline and format1 mean that fluentd starts a new event (i.e. a new log message) when it finds a line matching the format_firstline regex, which should be the date in your case. As soon as it finds the date, it parses the log message as per format1. It won't start a new record until another line with a date is encountered, so you don't have to worry about it.
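For illustration, take a hypothetical input built from the example values in this answer. Only the first line matches format_firstline, so fluentd groups everything up to the next dated line into a single event, and the stack trace keeps its \n characters inside the message field:

2019-11-28 22:22:14 [pool.thread.2] ERROR java.lang.NullPointerException: Name is null
    at java.lang.Enum.valueOf(Enum.java:236)
    at sa.com.stcs.tracking.events.ProfileType.valueOf(ProfileType.java:3)
2019-11-28 22:22:15 [pool.thread.2] INFO next request

With the format1 regex below, the first record would come out roughly as time = 2019-11-28 22:22:14, thread = pool.thread.2, level = ERROR, and message = "java.lang.NullPointerException: Name is null\n    at java.lang.Enum.valueOf(Enum.java:236)\n    at ..." (treat this as a sketch; the exact field contents depend on your regex).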

  • The following will parse time (ex: 2019-11-28 22:22:14), thread (ex: pool.thread.2), level (ex: INFO, ERROR etc.) and message (ex: java.lang....) until it finds the next line starting with a date.

format1 /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}) \[(?<thread>.*)\] (?<level>[^\s]+)(?<message>.*)/

  • If you want to keep it simple, you can use only the following, which will parse the log time and everything else as the message; a fuller pattern sketch follows below:

format_firstline /\d{4}-\d{1,2}-\d{1,2}/
format1 /^(?<logtime>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}.\d{1,9}) (?<log>.*)/
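As a minimal sketch (assuming the rest of the multi_format setup from the question stays as it is), the simpler variant would sit inside its own pattern block like this:

<pattern>
  format multiline
  format_firstline /\d{4}-\d{1,2}-\d{1,2}/
  format1 /^(?<logtime>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}.\d{1,9}) (?<log>.*)/
</pattern>

Here logtime and log are just capture names; rename them to whatever your downstream Elasticsearch mapping expects.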
