
Log4j2 converting multiline log stack traces into a single line

I'm trying to push my Elasticsearch server logs to rsys and then to Fluentd. For this, the stack-trace error logs should be on a single line.

It was multiline before:

[2022-08-05T07:45:38,068][ERROR][o.e.i.g.GeoIpDownloader  ] [techsrv01] exception during geoip databases update
org.elasticsearch.ElasticsearchException: not all primary shards of [.geoip_databases] index are active
    at org.elasticsearch.ingest.geoip.GeoIpDownloader.updateDatabases(GeoIpDownloader.java:137) ~[ingest-geoip-7.17.5.jar:7.17.5]
    at org.elasticsearch.ingest.geoip.GeoIpDownloader.runDownloader(GeoIpDownloader.java:284) [ingest-geoip-7.17.5.jar:7.17.5]
    at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:100) [ingest-geoip-7.17.5.jar:7.17.5]
    at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:46) [ingest-geoip-7.17.5.jar:7.17.5]
    at org.elasticsearch.persistent.NodePersistentTasksExecutor$1.doRun(NodePersistentTasksExecutor.java:42) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:777) [elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.17.5.jar:7.17.5]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
    at java.lang.Thread.run(Thread.java:833) [?:?]

After changing the pattern layout in log4j2.properties to the format below, I'm able to get it down to two lines, but I'm not able to reduce it further to a single line.

appender.rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}][%node_name] %marker %m %n %throwable{separator(|)}


[2022-08-05T11:04:40,810][ERROR][o.e.i.g.GeoIpDownloader  ][techsrv01]  exception during geoip databases update
ElasticsearchException[not all primary shards of [.geoip_databases] index are active]| at org.elasticsearch.ingest.geoip.GeoIpDownloader.updateDatabases(GeoIpDownloader.java:137)|    at org.elasticsearch.ingest.geoip.GeoIpDownloader.runDownloader(GeoIpDownloader.java:284)|  at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:100)|  at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:46)|   at org.elasticsearch.persistent.NodePersistentTasksExecutor$1.doRun(NodePersistentTasksExecutor.java:42)|   at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:777)|  at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)| at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)|   at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)|   at java.base/java.lang.Thread.run(Thread.java:833)
[2022-08-05T11:04:41,171][INFO ][o.e.c.r.a.AllocationService][techsrv01]  Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-2022.07.18-000001][0], [.kibana-event-log-7.17.5-000001][0], [.geoip_databases][0], [.ds-.logs-deprecation.elasticsearch-default-2022.07.18-000001][0]]]).

How can we achieve this using a Log4j2 layout pattern?
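For what it's worth, one pattern-layout variant that often collapses the event onto a single line is to drop the `%n` that sits between `%m` and `%throwable` and move it to the end, so the separator-joined stack trace stays on the message line. A sketch, not tested against this exact setup:

```properties
# Hypothetical variant of the pattern above: %n moved to the end,
# so message and |-joined stack trace share one line.
appender.rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}][%node_name] %marker %m %throwable{separator(|)}%n
```

In Log4j2, an explicit `%throwable` in the pattern suppresses the default trailing exception output, so the only line break left is the final `%n`.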

Instead of converting the logs to a single line using log4j2, I used the default log pattern. I also ditched rsys and used Fluentd directly to parse the logs; the configuration below only lets WARN and ERROR through, not INFO.

td-agent.conf

<source>
  @type tail
  path /var/log/elasticsearch/elasticdemo.log
  pos_file /var/log/elasticsearch/elasticdemo.log.pos
  tag elastic_error_self
  <parse>
    @type multiline
    format_firstline /(\d{4})-(\d\d)-(\d\d)/
    format1 /^(?<timestamp>\[.*?\])(?<logLevel>\[.*?\])(?<service>\[.*?\]) (?<node_name>\[.*?\])(?<message>.*)/
  </parse>
</source>

<filter **>
  @type grep
  <exclude>
    # drop INFO-level events; only WARN and ERROR pass through
    key logLevel
    pattern /INFO/
  </exclude>
</filter>

<match elastic**>
  @type elasticsearch
  host elasticIP/lbip/vmip  # where Elasticsearch is installed
  port 9200
  index_name elastic_error_self
  include_timestamp true
  # connection configs
  reconnect_on_error true
  reload_on_failure true
  slow_flush_log_threshold 90
  # buffer configs
  <buffer>
    @type file
    path /data/opt/fluentd/buffer/elastic_error_self
    chunk_limit_size 32MB
    total_limit_size 20GB
    flush_thread_count 8
    flush_mode interval
    retry_type exponential_backoff
    retry_timeout 10s
    retry_max_interval 30
    overflow_action drop_oldest_chunk
    flush_interval 5s
  </buffer>
</match>
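To sanity-check the `format_firstline` and `format1` regexes before reloading td-agent, a minimal Python sketch (the sample line is taken from the log excerpt above; note Python spells named groups `(?P<name>...)` where Fluentd's Ruby regexes use `(?<name>...)`):

```python
import re

# Same regexes as the <parse> section, translated to Python group syntax.
firstline = re.compile(r"(\d{4})-(\d\d)-(\d\d)")
fields = re.compile(
    r"^(?P<timestamp>\[.*?\])(?P<logLevel>\[.*?\])(?P<service>\[.*?\]) "
    r"(?P<node_name>\[.*?\])(?P<message>.*)"
)

sample = ("[2022-08-05T07:45:38,068][ERROR][o.e.i.g.GeoIpDownloader  ] "
          "[techsrv01] exception during geoip databases update")

# firstline decides whether a line starts a new multiline event.
assert firstline.search(sample), "line should start a new multiline event"

m = fields.match(sample)
assert m is not None
print(m.group("logLevel"))   # [ERROR]
print(m.group("node_name"))  # [techsrv01]
```

Since the grep filter excludes on `logLevel` matching `/INFO/`, the captured `[ERROR]` value above would pass through, while a `[INFO ]` event would be dropped.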
