
Sending logs to Elasticsearch using Logstash

I have a single-node ELK stack set up on 10.xx1, where I have installed Logstash, Elasticsearch, and Kibana.

My application runs on another server, 10.xx2, and I want its logs forwarded to Elasticsearch.

My log file on 10.xx2 is /var/log/myapp/myapp.log.

On 10.xx1 I put this configuration in /etc/logstash/conf.d:

input {
  file {
    path => "/var/log/myapp/myapp.log"
    type => "syslog"
  }
}

output {
  elasticsearch {
    hosts => ["10.252.30.11:9200"]
    index => "versa"
  }
}

My questions are as follows:

  1. Do I need to install Logstash on 10.xx2?
  2. How can I grep only for the lines containing "Error"?
  3. My app produces about 10 MB of logs per day. Can I add one more node to my Elasticsearch cluster so that the disk won't fill up?
  4. I don't want to keep my logs in Elasticsearch permanently. Is there any way to set an expiry time for the logs I am sending, i.e. delete them after 7 days?

I can answer 1 and 2.

  • You need to install Logstash (not recommended), Filebeat, or Packetbeat on 10.xx2. Filebeat and Packetbeat are both good, free tools from the Elastic.co company. Packetbeat captures application logs off the network rather than from log files; since your app writes to a log file, just use Filebeat.
  • You need to edit the Filebeat configuration file (filebeat.yml) to ship its logs to 10.xx1:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/myapp/myapp.log

And

logstash:
  hosts: ["10.xx1:5044"]
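
Putting the two fragments together, a minimal filebeat.yml would look roughly like this (a sketch assuming Filebeat 1.x, where the logstash section sits under a top-level output key; newer Filebeat versions rename prospectors to inputs):

filebeat:
  # Read the application log file on 10.xx2
  prospectors:
    -
      paths:
        - /var/log/myapp/myapp.log

# Ship events to the Logstash beats input on the ELK node
output:
  logstash:
    hosts: ["10.xx1:5044"]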

  • On 10.xx1, where you have installed Logstash (and the rest of the ELK stack), you need to create some configuration files for Logstash:

    • Add an input file named 02-beats-input.conf to /etc/logstash/conf.d/:

    input {
      beats {
        port => 5044
        ssl => false
      }
    }

    • Add a filter file named 03-myapp-filter.conf to /etc/logstash/conf.d/. You should find a filter pattern that matches your log format; a rough sketch follows below.
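
For example, a grok-based filter could look like this (a minimal sketch only: the pattern assumes log lines such as "2017-01-31 12:00:00 ERROR something happened", so adjust it to the actual format of myapp.log):

    filter {
      grok {
        # Hypothetical pattern: ISO8601 timestamp, a log level, then free text
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
      }
    }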

For 2:

Kibana acts as a web interface to Elasticsearch. Once started, it is available on port 5601 by default. You can then use the Discover interface to search for terms such as "Error"; it will return the first 500 documents containing that term.
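
Alternatively, if you want only the lines containing "Error" to reach Elasticsearch at all, you can drop everything else in Logstash (a sketch using the standard drop filter; it assumes each raw log line ends up in the default message field):

filter {
  # Discard any event whose message does not contain "Error"
  if [message] !~ /Error/ {
    drop { }
  }
}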

For 3:

Adding another Elasticsearch node lets you spread your data across nodes. But a single node can easily handle a few gigabytes without any problem.

For 4:

You can't set an expiry date on the data. At least it would not be automatic; you would have to search for all the logs expiring today and delete them.
Another (and better) solution is to use one index per day (with index => "versa-%{+YYYY.MM.dd}") and delete each index after 7 days, which is easily done with Elasticsearch Curator and a cron job.
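
With that approach, the output section from the question would become (same settings as before, only the index name changes to include the date):

output {
  elasticsearch {
    hosts => ["10.252.30.11:9200"]
    # One index per day, e.g. versa-2017.01.31
    index => "versa-%{+YYYY.MM.dd}"
  }
}

Dropping a whole daily index is far cheaper than deleting individual documents, which is why this is the usual retention pattern.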
