
Parse json with fluentd and winston

My application generates Apache logs as well as JSON data, something like this:

{ TableName: 'myTable', CapacityUnits: 0.5 }

I am using winston (3.2.1) as my logger. In my Kibana, I see each line of the JSON as a separate entry instead of a single JSON document. Any idea how to solve this?

My winston code looks like this:

const winston = require('winston');

const { format } = winston;

const prettyJson = format.printf((info) => {
  // If the message is a plain object, stringify it with a 2-space indent
  if (info.message.constructor === Object) {
    info.message = JSON.stringify(info.message, null, 2);
    console.log('inside prettyJson', info.message);
  }
  return `${info.level}: ${info.message}`;
});

const logLevel = process.env.LOG_LEVEL || 'debug';

const tsFormat = () => (new Date()).toLocaleTimeString();

const Logger = winston.createLogger({
  level: logLevel,
  transports: [
    new winston.transports.Console({
      timestamp: tsFormat,
      format: format.combine(
        format.colorize(),
        format.prettyPrint(),
        format.splat(),
        format.simple(),
        prettyJson,
      ),
    }),
  ],
});

module.exports = Logger;
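
For illustration (this usage snippet is my addition, assuming the module above is saved as logger.js), logging the object from the top of the question produces a multi-line entry on stdout, which is exactly what Fluentd later splits apart:

const Logger = require('./logger');

Logger.debug({ TableName: 'myTable', CapacityUnits: 0.5 });
// Because prettyJson stringifies with an indent of 2, this writes
// several lines to stdout, roughly:
// debug: {
//   "TableName": "myTable",
//   "CapacityUnits": 0.5
// }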

My fluentd config looks like this:

# Receive events over HTTP on port 9880
<source>
  @type http
  port 9880
  bind 0.0.0.0
  @log_level debug
</source>
# Receive events on 24224/tcp
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
# We need to massage the data before it goes into Elasticsearch
<filter **>
  # We parse the input with key "log" (https://docs.fluentd.org/filter/parser)
  @type parser
  key_name log
  # Keep the original key-value pairs in the result
  reserve_data true
  <parse>
    # Try each format in order: JSON first, then apache2, then raw
    @type multi_format
    <pattern>
      format json
    </pattern>
    <pattern>
      format apache2
    </pattern>
    <pattern>
      format none
    </pattern>
  </parse>
</filter>
# Fluentd will decide what to do here if the event is matched
# In our case, we want all the data to be matched hence **
<match **>
  # We want all the data to be copied to elasticsearch using the built-in
  # copy output plugin (https://docs.fluentd.org/output/copy)
  @type copy
  <store>
  # We want to store our data in Elasticsearch using the out_elasticsearch plugin
  # https://docs.fluentd.org/output/elasticsearch. See Dockerfile for installation
    @type elasticsearch
    time_key timestamp_ms
    host hostip
    port 9200
    with_transporter_log true
    @log_level debug
    log_es_400_reason true
    # Use conventional index name format (logstash-%Y.%m.%d)
    logstash_format true
    # We will use this when kibana reads logs from ES
    logstash_prefix fluentd
    logstash_dateformat %Y-%m-%d
    flush_interval 1s
    reload_connections false
    reconnect_on_error true
    reload_on_failure true
  </store>
</match>
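
For context (these sample records are my addition, not from the original post): the parser filter reads the raw line from each record's log key and tries the patterns in order, so a complete JSON line is parsed as JSON, while a fragment of pretty-printed output matches neither json nor apache2 and falls through to none, becoming its own event:

# A complete JSON line matches the first <pattern>:
{"log": "{\"TableName\": \"myTable\", \"CapacityUnits\": 0.5}"}
# A pretty-printed fragment only matches the "none" fallback:
{"log": "  \"TableName\": \"myTable\","}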

Maybe it's too late, but if you use:

format: format.combine(
  format.prettyPrint(),
  prettyJson
),

that combination writes each line of the pretty-printed JSON to stdout as a separate line, and Fluentd reads every line as a different event. You can stop emitting those extra lines (a sketch follows below), or use something like fluentd's multiline parser.
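
For the first option, a minimal sketch (my addition, not part of the original answer): drop the indent argument to JSON.stringify so the whole object is serialized on a single line, keeping one log event per stdout line.

const winston = require('winston');
const { format } = winston;

// Serialize object messages on a single line so each log event
// occupies exactly one stdout line
const singleLineJson = format.printf((info) => {
  if (info.message && info.message.constructor === Object) {
    // No indent argument => no newlines inside the JSON
    info.message = JSON.stringify(info.message);
  }
  return `${info.level}: ${info.message}`;
});

const Logger = winston.createLogger({
  level: 'debug',
  transports: [
    new winston.transports.Console({
      format: singleLineJson,
    }),
  ],
});

Logger.debug({ TableName: 'myTable', CapacityUnits: 0.5 });
// => debug: {"TableName":"myTable","CapacityUnits":0.5}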

https://docs.fluentd.org/parser/multiline
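
For the second option, here is a minimal, untested sketch of the multiline parser, assuming the logs are tailed from a file of plain (uncolored) winston output and that every logical entry starts with a level prefix like "debug:" (the path, pos_file, and tag are hypothetical):

<source>
  @type tail
  path /var/log/app/app.log          # hypothetical path
  pos_file /var/log/app/app.log.pos  # hypothetical position file
  tag app.logs
  <parse>
    @type multiline
    # A new logical entry starts with a winston level prefix
    format_firstline /^(error|warn|info|http|verbose|debug|silly):/
    # [\s\S] also matches newlines, so a pretty-printed JSON body
    # stays inside a single event's "message" field
    format1 /^(?<level>[a-z]+): (?<message>[\s\S]*)/
  </parse>
</source>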
