
Logstash, how to use grok patterns coming from event data

I have an ELK stack deployed on Kubernetes, used to collect containers' data. Among other things, it uses a grok filter to parse the actual log line based on a pattern.

My wish is to be able to set up this pattern by using an annotation on the Kubernetes pod.

I added an annotation called elk-grok-pattern to the pod and configured filebeat to forward the annotation, and I can get the annotation value as a field in my event in logstash. So far so good.

The problem is that I am unable to use the value of my field as a grok pattern.

The annotation on my pod looks like this:

Annotations:    elk-grok-pattern=%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:status} %{NUMBER:response_time}

The filter I am trying to use is similar to the following:

filter {
  # create a new field called "elk-grok-pattern" from the pod annotation
  mutate {
        rename => { "[kubernetes][annotations][elk-grok-pattern]" => "elk-grok-pattern" }
  }

  grok {
    pattern_definitions => {
      "CUSTOM" => "%{elk-grok-pattern}"
    }
    match => { "log" => "%{CUSTOM}" }
  }
}

Unluckily this leads to an error:

Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Grok::PatternError: pattern %{elk-grok-pattern} not defined>

In practice, grok is interpreting my pattern literally, rather than evaluating the string content coming from the event.

I also tried using the pattern directly, without defining a pattern_definition, like this:

grok {
  match => { "log" => "%{elk-grok-pattern}" }
}

But I get the exact same error.

Is there a way to accomplish my goal? Any advice or possible workaround would be very much appreciated.

If you don't wish to use this pattern in other places, why not just use it inline in the match, like this?

grok {
  match => { "log" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:status} %{NUMBER:response_time}" }
}
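If the pattern genuinely needs to vary per pod, note that grok compiles its patterns when the pipeline starts, so it cannot expand a pattern stored in an event field at runtime. One possible workaround (just a sketch, and it assumes the annotation can carry a short format name such as nginx-access instead of the full pattern) is to branch on the annotation value with conditionals and inline one grok block per known format:

filter {
  mutate {
        rename => { "[kubernetes][annotations][elk-grok-pattern]" => "elk-grok-pattern" }
  }

  # one branch per known annotation value; add further
  # "else if" branches for every format you expect
  if [elk-grok-pattern] == "nginx-access" {
    grok {
      match => { "log" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:status} %{NUMBER:response_time}" }
    }
  }
}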

If you want to use it later in other filters, check out this page on pattern creation:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#setting_patterns_dir
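For example, the patterns_dir approach described there might look like the following (a sketch; the directory and the PODLOG pattern name are illustrative). A file such as ./patterns/extra would define the pattern once:

PODLOG %{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:status} %{NUMBER:response_time}

Any grok filter could then load the directory and reference the pattern by name:

grok {
  patterns_dir => ["./patterns"]
  match => { "log" => "%{PODLOG}" }
}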
