Logstash and filebeat in the ELK stack

We are setting up Elasticsearch, Kibana, Logstash and Filebeat on a server to analyse log files from many applications. Due to reasons* each application log file ends up in a separate directory on the ELK server. We have about 20 log files.

  1. As I understand it, we can run a Logstash pipeline config file for each application log file. That would be one Logstash instance running 20 pipelines in parallel, and each pipeline would need its own Beats port (a sketch follows this list). Please confirm that this is correct.
  2. Can we have one Filebeat instance running, or do we need one for each pipeline/log file?
  3. Is this architecture OK, or do you see any major downsides?
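Something like the following pipelines.yml sketch is what we have in mind (pipeline IDs, config paths and ports are made up for illustration):

# pipelines.yml - one pipeline per application (hypothetical IDs and paths)
- pipeline.id: app01
  path.config: "/etc/logstash/conf.d/app01.conf"   # contains input { beats { port => 5044 } }
- pipeline.id: app02
  path.config: "/etc/logstash/conf.d/app02.conf"   # contains input { beats { port => 5045 } }
# ... and so on up to app20, each .conf listening on its own Beats port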

Thank you!

*There are different vendors responsible for different applications, they run across many different operating systems, and many of them will not or cannot install anything like Filebeat.

We do not recommend reading log files from network volumes. Whenever possible, install Filebeat on the host machine and send the log files directly from there. Reading files from network volumes (especially on Windows) can have unexpected side effects. For example, changed file identifiers may result in Filebeat reading a log file from scratch again.

Reference

We always recommend installing Filebeat on the remote servers. Using shared folders is not supported. The typical setup is that you have a Logstash + Elasticsearch + Kibana in a central place (one or multiple servers) and Filebeat installed on the remote machines from where you are collecting data.

Reference
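In that layout, each remote Filebeat needs only an output section pointing at the central Logstash. A minimal sketch, assuming Logstash listens for Beats traffic on port 5044 (the hostname and port are placeholders):

# filebeat.yml on each remote machine (hostname and port are assumptions)
output.logstash:
  hosts: ["elk-server.example.com:5044"]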

With a single Filebeat instance running, you can apply different configuration settings to different files by defining multiple input sections, as in the example below; see the Filebeat documentation for more.

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - 'C:\App01_Logs\log.txt'
  tags: ["App01"]
  fields:
    app_name: App01

- type: log
  enabled: true
  paths:
    - 'C:\App02_Logs\log.txt'
  tags: ["App02"]
  fields:
    app_name: App02

- type: log
  enabled: true
  paths:
    - 'C:\App03_Logs\log.txt'
  tags: ["App03"]
  fields:
    app_name: App03

And you can have one Logstash pipeline with an if statement in the filter:

filter {
    if [fields][app_name] == "App01" {
      grok { }   # App01-specific pattern goes here
    } else if [fields][app_name] == "App02" {
      grok { }   # App02-specific pattern goes here
    } else {
      grok { }   # fallback for everything else
    }
}

The condition can also be if "App02" in [tags] or if [source] == "C:\\App01_Logs\\log.txt", matching the tags and source fields that Filebeat sends.
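For example, a minimal sketch of the tag-based variant (the grok pattern is a made-up placeholder, not App02's actual log format):

filter {
  if "App02" in [tags] {
    grok {
      # hypothetical pattern - replace with the real App02 log layout
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
    }
  }
}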
