Why is my grok filter not parsing my Filebeat messages? I cannot see the Logstash-parsed fields in Kibana (ELK)
I have configured Filebeat so that it reads new log entries (currently syslog) from the paths given in the filebeat.yml file and forwards them to Logstash, which should then parse the data and forward it to Elasticsearch.
I do not see the parsed grok fields (e.g. syslog_timestamp, syslog_hostname, syslog_pid) anywhere in the Kibana events, and I cannot work out why the data is not being parsed.
Filebeat input file
Grok filter (in Logstash)
input {
  beats {
    port => "5044"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      match => ["syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"]
    }
  }
}
output {
  elasticsearch {
    hosts => ["10.107.50.205:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
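As a sanity check of the pattern itself, the grok expression above is just a named regex; the following Python sketch uses hand-expanded, simplified stand-ins for the grok patterns (these approximations are my assumption, not the exact definitions shipped with Logstash) to show which fields it would pull out of the sample syslog line:

```python
import re

# Simplified stand-ins for SYSLOGTIMESTAMP, SYSLOGHOST, DATA, POSINT, GREEDYDATA.
# These are illustrative approximations, not the official Logstash grok patterns.
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>.*?)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

# Shortened version of the "message" field from the Kibana document above.
line = ("Sep 30 18:33:20 ut012905 metricbeat[46882]: "
        "2019-09-30T18:33:20.254+0100 INFO [monitoring] Non-zero metrics")

m = SYSLOG_RE.match(line)
print(m.groupdict())
```

So the pattern does match this message; the problem is not the grok expression but the conditional that guards it, as the answer below explains.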
Kibana (Elasticsearch JSON)
{
"_index": "filebeat-2019.09.30",
"_type": "_doc",
"_id": "kss7g20B5aLjyCF-6L2B",
"_version": 1,
"_score": null,
"_source": {
"message": "Sep 30 18:33:20 ut012905 metricbeat[46882]: 2019-09-30T18:33:20.254+0100#011INFO#011[monitoring]#011log/log.go:145#011Non-zero metrics in the last 30s#011{\"monitoring\": {\"metrics\": {\"beat\":{\"cpu\":{\"system\":{\"ticks\":770020,\"time\":{\"ms\":80}},\"total\":{\"ticks\":2091400,\"time\":{\"ms\":172},\"value\":2091400},\"user\":{\"ticks\":1321380,\"time\":{\"ms\":92}}},\"handles\":{\"limit\":{\"hard\":4096,\"soft\":1024},\"open\":5},\"info\":{\"ephemeral_id\":\"63755af9-7bad-4b09-8909-52e7018409fe\",\"uptime\":{\"ms\":369450706}},\"memstats\":{\"gc_next\":23786560,\"memory_alloc\":12161776,\"memory_total\":453661591544,\"rss\":2052096},\"runtime\":{\"goroutines\":36}},\"libbeat\":{\"config\":{\"module\":{\"running\":0}},\"pipeline\":{\"clients\":3,\"events\":{\"active\":89,\"published\":47,\"total\":47}}},\"metricbeat\":{\"system\":{\"cpu\":{\"events\":3,\"success\":3},\"filesystem\":{\"events\":3,\"success\":3},\"fsstat\":{\"events\":1,\"success\":1},\"load\":{\"events\":3,\"success\":3},\"memory\":{\"events\":3,\"success\":3},\"network\":{\"events\":6,\"success\":6},\"process\":{\"events\":22,\"success\":22},\"process_summary\":{\"events\":3,\"success\":3},\"socket_summary\":{\"events\":3,\"success\":3}}},\"system\":{\"load\":{\"1\":0.04,\"15\":0.01,\"5\":0.04,\"norm\":{\"1\":0.04,\"15\":0.01,\"5\":0.04}}}}}}",
"host": {
"containerized": false,
"name": "ut012905",
"architecture": "x86_64",
"hostname": "ut012905",
"id": "74e969e835cbfe982aa3ed2f5d76fdd9",
"os": {
"platform": "ubuntu",
"name": "Ubuntu",
"version": "16.04.6 LTS (Xenial Xerus)",
"codename": "xenial",
"family": "debian",
"kernel": "4.4.0-161-generic"
}
},
"ecs": {
"version": "1.0.1"
},
"@version": "1",
"agent": {
"id": "afafb888-8d08-4a4b-8f4d-6c64291fb43d",
"version": "7.3.2",
"hostname": "ut012905",
"type": "filebeat",
"ephemeral_id": "57c8f630-00d5-4c88-bf2d-bb1102cd8530"
},
"log": {
"offset": 3218320,
"file": {
"path": "/var/log/syslog"
}
},
"tags": [
"myCluster1",
"beats_input_codec_plain_applied"
],
"input": {
"type": "log"
},
"fields": {
"env": "staging"
},
"@timestamp": "2019-09-30T17:33:23.354Z"
},
"fields": {
"@timestamp": [
"2019-09-30T17:33:23.354Z"
]
},
"sort": [
1569864803354
]
}
The document_type setting was removed from Filebeat in version 6.0. Since you are running Filebeat 7.3, that setting is ignored and your messages have no type field.
You need to use fields to add a new field, and change your pipeline to filter on that field.
You need something like this in your Filebeat configuration:
fields:
  type: syslog
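In context, the relevant part of filebeat.yml would look roughly like this (the input path is taken from the question; everything else is a placeholder sketch, adjust to your setup):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/syslog
    fields:
      type: syslog
    # By default custom fields are nested under "fields",
    # so this value arrives in Logstash as [fields][type].
```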
Then you need to change your conditional in Logstash:
if [fields][type] == "syslog"
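Putting it together, the filter block from the question would become the following; only the conditional changes, the grok and date blocks stay as they were:

```
filter {
  if [fields][type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      match => ["syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"]
    }
  }
}
```

Alternatively, if you set fields_under_root: true in Filebeat, the field would be added at the top level and the original `if [type] == "syslog"` conditional would work unchanged.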