
Is it possible to maintain the index name from FileBeat to LogStash for Elasticsearch?

I'm really new to ELK. I have set up an ELK stack where FileBeat sends the logs to LogStash for some processing and then outputs them to Elasticsearch.

I was wondering if it is possible to keep the index name set in filebeat.yml all the way through to Elasticsearch. The reason I want this is that I want separate indices for the different types of app servers that generate my logs. If I leave out index in logstash.conf, it uses the default; but if I specify something, that obviously takes effect instead. I simply want it to use what was set in FileBeat.

Or is there some way to configure multiple output sections where log types can be evaluated so I can name them appropriately?

filebeat.yml

# Optional index name. The default index name is set to filebeat in all lowercase.
  index: "something-%{+yyyy.MM.dd}"

logstash.conf

output {
  elasticsearch { 
    hosts => ["somehost:12345"]
    index => "my_filebeat_index_name_would_be_preferred-%{+yyyy-MM-dd}"
  }
}

I would like to continue using LogStash rather than going directly to Elasticsearch, because I have custom grok patterns etc. Any help would be greatly appreciated.

Thanks.

The index name you can specify in filebeat.yml only applies to Filebeat's own elasticsearch output, i.e. when Filebeat connects to your cluster directly. If you use Logstash as the Filebeat destination, that setting is not carried over.

Q: Or is there some way to configure multiple output sections where log types can be evaluated so I can name them appropriately?

Yes, this is absolutely possible within a Logstash pipeline (and fairly common). First of all, you need to mark your logs with some criteria so that Logstash can choose the correct elasticsearch output (and with it the correct index). You can achieve this via tags. For example, all logs/events of category A get the tag "tag_A" (you can set tags individually for every log source in the particular log inputs, or globally in filebeat.yml); see the sketch below.
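
A minimal filebeat.yml sketch of this per-input tagging (the paths, tag names, and Logstash port are placeholders, not taken from your setup) could look like this:

# filebeat.yml (sketch): tag each input so Logstash can tell the sources apart
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app-a/*.log
    tags: ["tag_A"]            # events from this input carry tag_A
  - type: log
    paths:
      - /var/log/app-b/*.log
    tags: ["tag_B"]            # events from this input carry tag_B

output.logstash:
  hosts: ["somehost:5044"]     # send to Logstash instead of Elasticsearch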

The next step is to evaluate the tag values in the Logstash pipeline. You would do it like the following:

output{
  if "tag_A" in [tags]{
    elasticsearch {
      hosts => ["somehost:12345"]
      index => "index-A-%{+yyyy-MM-dd}"
    }
  }
  else if "tag_B" in [tags]{
    elasticsearch {
      hosts => ["somehost:12345"]
      index => "index-B-%{+yyyy-MM-dd}"
    }
  }
}

This if/else structure lets you index your data into different indices.

I hope I could help you.

EDIT:

Your evaluation is not limited to tags. You can evaluate any field contained in your documents, e.g. the file name, hostname etc.

You may want to take a look at this reference ( https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html ) on how to access field values, write conditionals etc. in Logstash configurations.
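
As a rough illustration, you could branch on the source file path instead of a tag. Note that the field name here is an assumption: recent Filebeat versions ship the path as [log][file][path], older ones as [source], so check your actual documents first.

output {
  if [log][file][path] =~ /app-a/ {    # regex match on the shipped file path
    elasticsearch {
      hosts => ["somehost:12345"]
      index => "index-A-%{+yyyy-MM-dd}"
    }
  }
}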

EDIT 2:

A more sophisticated approach would be to add a field to your documents that contains the exact destination index name (via the fields option in the log inputs or in filebeat.yml). With this approach there is no need for the evaluation in the Logstash pipeline anymore, since you set the value of the index setting dynamically from the field value.

Assuming you name this field destination_index, you could implement the output plugin like the following:

output{
  if [destination_index] {   # optional check that the field exists in the document
    elasticsearch {
      hosts => ["somehost:12345"]
      index => "%{[destination_index]}-%{+yyyy-MM-dd}"
    }
  }
}
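
On the Filebeat side, such a field could be set per input via the fields option. Here is a sketch, assuming the hypothetical field name destination_index and example paths as above:

filebeat.inputs:
  - type: log
    paths:
      - /var/log/app-a/*.log
    fields:
      destination_index: "index-a"   # becomes the index name prefix in Logstash
    fields_under_root: true          # put the field at the event's top level

Without fields_under_root: true the field ends up nested as [fields][destination_index], so the index option in Logstash would have to reference "%{[fields][destination_index]}-%{+yyyy-MM-dd}" instead.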
