
Logstash TCP input w/ JSON codec treats each line as a separate event

I'm trying to read log4j v2.3 JSON output over a Logstash TCP socket using the Logstash JSON codec, but Logstash treats each line as a separate event to be indexed instead of reading each JSON object as one event.

log4j configuration

<Appenders>
    <Console name="console" target="SYSTEM_OUT">
        <PatternLayout pattern="%d %p [%c] - &lt;%m&gt;%n"/>
    </Console>
    ... removed for brevity ...
    <Socket name="logstash" host="localhost" port="4560">
      <JSONLayout />
    </Socket>
</Appenders>
<Loggers>
    <Logger name="org.jasig" level="info" additivity="false">
        <AppenderRef ref="console"/>
        <AppenderRef ref="file"/>
        <AppenderRef ref="logstash"/>
    </Logger>
    ... removed for brevity ...
    <Root level="error">
        <AppenderRef ref="console"/>
        <AppenderRef ref="logstash"/>
    </Root>
</Loggers>

Logstash configuration

input {
  tcp {
      port => 4560
      codec => json
  }
}
output {
  elasticsearch {}
  stdout {}
}

Logstash output

Each line is even parsed as its own event, rather than the whole JSON object being treated as a single event.

2016-03-22T01:24:27.213Z 127.0.0.1 {
2016-03-22T01:24:27.215Z 127.0.0.1   "timeMillis" : 1458609867060,
2016-03-22T01:24:27.216Z 127.0.0.1   "thread" : "localhost-startStop-1",
2016-03-22T01:24:27.217Z 127.0.0.1   "level" : "INFO",
2016-03-22T01:24:27.218Z 127.0.0.1   "loggerName" : "com.hazelcast.instance.DefaultAddressPicker",
2016-03-22T01:24:27.219Z 127.0.0.1   "message" : "[LOCAL] [dev] [3.5] Resolving domain name 'wozniak.local' to address(es): [192.168.0.16, fe80:0:0:0:6203:8ff:fe89:6d3a%4]\n",
2016-03-22T01:24:27.220Z 127.0.0.1   "endOfBatch" : false,
2016-03-22T01:24:27.221Z 127.0.0.1   "loggerFqcn" : "org.apache.logging.slf4j.Log4jLogger"
2016-03-22T01:24:27.222Z 127.0.0.1 }
2016-03-22T01:24:32.281Z 127.0.0.1 {
2016-03-22T01:24:32.283Z 127.0.0.1   "timeMillis" : 1458609872279,
2016-03-22T01:24:32.286Z 127.0.0.1   "thread" : "localhost-startStop-1",
2016-03-22T01:24:32.287Z 127.0.0.1   "level" : "WARN",
2016-03-22T01:24:32.289Z 127.0.0.1   "loggerName" : "com.hazelcast.instance.DefaultAddressPicker",
2016-03-22T01:24:32.294Z 127.0.0.1   "message" : "[LOCAL] [dev] [3.5] Cannot resolve hostname: 'Jons-MacBook-Pro-2.local'\n",
2016-03-22T01:24:32.299Z 127.0.0.1   "endOfBatch" : false,
2016-03-22T01:24:32.302Z 127.0.0.1   "loggerFqcn" : "org.apache.logging.slf4j.Log4jLogger"
2016-03-22T01:24:32.307Z 127.0.0.1 }

Thanks in advance for any help.

OK, I got this working. It's not how I wanted to solve it, but it does work.

Instead of the json codec, I used the multiline codec on the input together with a json filter.

logstash configuration

input {
  tcp {
      port => 4560
      codec => multiline {
        pattern => "^\{$"
        negate => true
        what => previous
      }  
  }
}

filter {
  json { source => "message" }
}

output {
  elasticsearch {}
  stdout {}
}

With pattern => "^\{$", negate => true and what => previous, any line that is not a bare "{" gets appended to the previous event, so the pretty-printed JSON is reassembled before the json filter parses it. Here is the output, indexed correctly:

2016-03-22T09:42:26.880Z 127.0.0.1 0 expired tickets found to be removed.
2016-03-22T09:43:26.992Z 127.0.0.1 Finished ticket cleanup.
2016-03-22T09:43:47.120Z 127.0.0.1 Setting path for cookies to: /cas/ 
2016-03-22T09:43:47.122Z 127.0.0.1 AcceptUsersAuthenticationHandler successfully authenticated hashbrowns+password
2016-03-22T09:43:47.131Z 127.0.0.1 Authenticated hashbrowns with credentials [hashbrowns+password].
2016-03-22T09:43:47.186Z 127.0.0.1 Audit trail record BEGIN
=============================================================
WHO: hashbrowns+password
WHAT: supplied credentials: [hashbrowns+password]
ACTION: AUTHENTICATION_SUCCESS
APPLICATION: CAS
WHEN: Tue Mar 22 05:43:47 EDT 2016
CLIENT IP ADDRESS: 0:0:0:0:0:0:0:1
SERVER IP ADDRESS: 0:0:0:0:0:0:0:1
=============================================================

This seems a bit brittle since it relies on how log4j formats the JSON, so I'd still love to hear how to get the json codec working with multi-line JSON output.
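
For reference, one common alternative (just a sketch, I haven't tested it in this setup) is to make log4j2 emit each event as a single line of JSON and let Logstash parse it line by line. JSONLayout supports compact="true" and eventEol="true" for this, which pairs with Logstash's json_lines codec:

log4j configuration (sketch)

<Socket name="logstash" host="localhost" port="4560">
    <!-- compact + eventEol writes each log event as one JSON object per line -->
    <JSONLayout compact="true" eventEol="true" />
</Socket>

logstash configuration (sketch)

input {
  tcp {
      port => 4560
      # json_lines expects one JSON document per line, matching the layout above
      codec => json_lines
  }
}

With one event per line, the codec sees complete JSON objects and the multiline workaround shouldn't be needed.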
