Logstash split log and insert it separately into elasticsearch
I am writing a logstash config file, and a log I am receiving is giving me issues: the team sent me multiple logs merged into one, e.g.

message: [logitem(aaa=1, bbb=1, ccc=1), logitem(aaa=2, bbb=2, ccc=2), logitem(aaa=3, bbb=3, ccc=3)]

Would it be possible to split this log into 3 and insert them individually into elasticsearch (3 records)?
This way should work (see comments below for discussion and refs). You might need to tweak the grok / scan regexes in a couple of places.
grok {
  match => {
    "message" => "^\[%{GREEDYDATA:logitems}\]$"
  }
}
ruby {
  code => "event.set('logitem', event.get('message').scan(/logitem\([^\)]+\)/))"
}
split {
  field => "logitem"
}
grok {
  match => {
    "logitem" => "^logitem\(aaa=%{DATA:field_a}, bbb=%{DATA:field_b}, ccc=%{DATA:field_c}\)"
  }
}
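To see what the ruby filter produces, the same scan call can be run in plain Ruby (outside Logstash) against the sample message from the question:

```ruby
# Plain-Ruby check of the scan call used in the ruby filter above,
# run against the sample message from the question.
message = "[logitem(aaa=1, bbb=1, ccc=1), logitem(aaa=2, bbb=2, ccc=2), logitem(aaa=3, bbb=3, ccc=3)]"
items = message.scan(/logitem\([^\)]+\)/)
# items now holds the three "logitem(...)" substrings, one per record,
# which the split filter then turns into three separate events
```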
The intention of the scan regex is to match a string that starts with logitem, followed by a ( character, then one or more characters that are not ), ending with a ) character.

This way, surprisingly, does not work. See this github issue for more detail. TL;DR... grok will not put repeated matches into an array.
filter {
  grok {
    match => {
      "message" => "^\[*(logitem\(%{DATA:logitem}\), )*logitem\(%{DATA:logitem}\)\]$"
    }
  }
  split {
    field => "logitem"
  }
}
If you are sure the messages will always have the aaa=, bbb= format, you could be more explicit.
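For illustration, a stricter version of the per-item match can be tried in plain Ruby; here the values are assumed to be plain integers (the grok equivalent would swap %{DATA:...} for a narrower pattern such as %{NUMBER:...}):

```ruby
# Hypothetical stricter pattern, assuming the values are plain integers.
# Named captures play the role of grok's %{...:field_a} semantics.
item = "logitem(aaa=1, bbb=2, ccc=3)"
m = item.match(/\Alogitem\(aaa=(?<field_a>\d+), bbb=(?<field_b>\d+), ccc=(?<field_c>\d+)\)\z/)
# m["field_a"] etc. give the captured values as strings
```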
[edits 1: marked grok method as non-working and added ruby method. 2: reordered a couple of things for better flow]