How to extract fields from existing logs (fluent-bit in ECS)
I have configured Fluent Bit on my ECS cluster, and I can see the logs in Kibana. But all the log data is sent to a single field, "log". How can I extract each field into a separate field?

There is already a solution for fluentd in this question. But how can I achieve the same with Fluent Bit?

There is a solution for Kubernetes with Fluent Bit: https://docs.fluentbit.io/manual/filter/kubernetes

How do I achieve the same thing in ECS?
Generally, Fluent Bit ships exactly the Docker log files it picks up from /var/lib/docker/containers/*/*.log. You can browse this path on your machine and see that the files contain JSON strings with exactly the two fields you mentioned.
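For illustration, a single line in one of those files typically looks like the following. The `log`, `stream`, and `time` keys are what Docker's json-file logging driver emits; the log message itself is a made-up example:

```json
{"log": "127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] \"GET / HTTP/1.1\" 200 612\n", "stream": "stdout", "time": "2023-10-10T13:55:36.123456789Z"}
```

Whatever your application prints ends up as one opaque string inside `log`, which is why everything shows up in a single field in Kibana.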
From here you have a number of options; I'll cover two that I know well:
Use Fluent Bit filter plugins. You should know the log structure well; this helps you create the right filter pipeline to parse the log field. Usually, people use filter plugins for this. If you add log examples, I will be able to put together an example of such a filter.
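As a minimal sketch of the first approach, assuming your application writes one JSON object per line (so the `log` field itself contains JSON), you could define a parser and apply it with Fluent Bit's `parser` filter. The parser name `app_json` is arbitrary; adjust `Time_Key`/`Time_Format` to your logs:

```ini
# parsers.conf -- hypothetical parser for JSON-formatted application logs
[PARSER]
    Name         app_json
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%L

# fluent-bit.conf -- run the parser against the "log" key of every record
[FILTER]
    Name         parser
    Match        *
    Key_Name     log
    Parser       app_json
    # keep the other fields (stream, time, ...) instead of dropping them
    Reserve_Data On
```

If your logs are plain text rather than JSON, you would use `Format regex` with a `Regex` pattern in the `[PARSER]` block instead, which is why knowing the exact log structure matters.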
Use an Elasticsearch ingest node. Again, you should know the log structure well to be able to create a processor pipeline that parses the log field. One more time: specific log examples help us to help you.

The most widely used filter/processor is the grok filter/processor. This tool has a lot of options for parsing structured text out of any log.
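As a sketch of the second approach, you could register an ingest pipeline (for example via `PUT _ingest/pipeline/parse-log` in Kibana Dev Tools) that runs a grok processor over the `log` field. The pipeline name and the pattern below are hypothetical, matching an access-log-style line; replace the pattern with one that fits your actual log format:

```json
{
  "description": "Example pipeline: grok the raw 'log' field into separate fields",
  "processors": [
    {
      "grok": {
        "field": "log",
        "patterns": [
          "%{IPORHOST:client_ip} - - \\[%{HTTPDATE:timestamp}\\] \"%{WORD:method} %{URIPATHPARAM:path} HTTP/%{NUMBER:http_version}\" %{NUMBER:status:int} %{NUMBER:bytes:int}"
        ]
      }
    }
  ]
}
```

You can then point Fluent Bit's `es` output at this pipeline with its `Pipeline` option, so Elasticsearch does the parsing at ingest time instead of Fluent Bit doing it in the pipeline.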