Is it possible to maintain the index name from FileBeat to LogStash for Elasticsearch?

I'm really new to ELK. I have set up an ELK stack where FileBeat sends the logs to LogStash for some processing, and LogStash then outputs to Elasticsearch.

I was wondering if it is possible to maintain the index name set in filebeat.yml all the way through to Elasticsearch. The reason I want this is that I have multiple types of app servers generating logs and I want a separate index for each. If I leave index out of logstash.conf, Logstash falls back to its default; if I specify something, that obviously takes effect instead. I simply want it to use what was set in FileBeat.

Or is there some way to configure multiple output sections where log types can be evaluated so I can name them appropriately?

filebeat.yml

# Optional index name. The default index name is set to filebeat in all lowercase.
  index: "something-%{+yyyy.MM.dd}"

logstash.conf

output {
  elasticsearch { 
    hosts => ["somehost:12345"]
    index => "my_filebeat_index_name_would_be_preferred-%{+yyyy-MM-dd}"
  }
}

I would like to continue to use LogStash, because I have custom grok patterns etc., rather than going directly to Elasticsearch. Any help would be greatly appreciated.

Thanks.

The index name you can specify in filebeat.yml only applies to the elasticsearch output, i.e. when Filebeat connects to your cluster directly. If you use Logstash as your Filebeat destination instead, that setting has no effect.
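For illustration, the index setting belongs to the elasticsearch output block in filebeat.yml, so it is simply ignored once you send to Logstash instead (a minimal sketch; hosts and index name are placeholders):

output.elasticsearch:
  hosts: ["somehost:9200"]
  index: "something-%{+yyyy.MM.dd}"   # only honored by this output

# output.logstash:
#   hosts: ["somehost:5044"]          # no index setting exists here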

Q: Or is there some way to configure multiple output sections where log types can be evaluated so I can name them appropriately?

Yes, this is absolutely possible within a Logstash pipeline (and fairly common). First of all, you need to mark your logs in a way that lets Logstash choose the correct elasticsearch output (and with that the correct index). You can achieve this via tags: for example, all logs/events of category A get the tag "tag_A". You can set the tags individually for every log source in the particular log inputs, or globally in filebeat.yml, as sketched below.
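A minimal sketch of the Filebeat side, assuming the filebeat.inputs syntax of recent Filebeat versions (the paths and tag names here are placeholders):

filebeat.inputs:
  - type: log
    paths:
      - /var/log/app-a/*.log
    tags: ["tag_A"]        # events from this input carry "tag_A"
  - type: log
    paths:
      - /var/log/app-b/*.log
    tags: ["tag_B"]        # events from this input carry "tag_B"

output.logstash:
  hosts: ["somehost:5044"]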

The next step is to evaluate the tag values in the Logstash pipeline. You would do it as follows:

output {
  if "tag_A" in [tags] {
    elasticsearch {
      hosts => ["somehost:12345"]
      index => "index-A-%{+yyyy-MM-dd}"
    }
  }
  else if "tag_B" in [tags] {
    elasticsearch {
      hosts => ["somehost:12345"]
      index => "index-B-%{+yyyy-MM-dd}"
    }
  }
}

This if/else structure lets you index your data into various indices. Note that an event matching none of the conditions is not sent to any output, so you may want to add a final else branch with a catch-all index.

I hope this helps.

EDIT:

Your evaluation is not limited to tags. You can evaluate any field contained in your documents, e.g. the file name, hostname, etc.
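For example, a condition on the log file path could look like the following. This is just a sketch: the field name [log][file][path] assumes a recent Filebeat version; older versions shipped the path in a field called source.

output {
  # route on the originating file path instead of tags (field name assumed, see above)
  if [log][file][path] =~ /app-a/ {
    elasticsearch {
      hosts => ["somehost:12345"]
      index => "index-A-%{+yyyy-MM-dd}"
    }
  }
}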

You may want to take a look at the reference on event-dependent configuration (https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html) for how to access field values, write conditionals, etc. in Logstash configurations.

EDIT 2:

A more sophisticated approach would be to add a field to your documents that contains the exact destination index name (via the fields option in the log inputs or in filebeat.yml). With this approach there's no need for the evaluation in the Logstash pipeline anymore, since you dynamically set the value of the index setting from the field value.
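On the Filebeat side this could look like the sketch below. One assumption to be aware of: Filebeat nests custom fields under a fields key by default, so either set fields_under_root: true as shown here, or reference the field as [fields][destination_index] in Logstash.

filebeat.inputs:
  - type: log
    paths:
      - /var/log/app-a/*.log         # placeholder path
    fields:
      destination_index: "index-a"   # placeholder index name
    fields_under_root: true          # lift destination_index to the event's top level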

Assuming you name this field destination_index, you could implement the output plugin as follows:

output {
  if [destination_index] {   # optional check for the field's existence in the document
    elasticsearch {
      hosts => ["somehost:12345"]
      index => "%{[destination_index]}-%{+yyyy-MM-dd}"
    }
  }
}
