
Sending logs to Elasticsearch using Logstash

I have a single-node ELK setup on 10.xx1, where I have installed Logstash, Elasticsearch, and Kibana.

My application runs on another server, 10.xx2, and I want its logs forwarded to Elasticsearch.

My log file is /var/log/myapp/myapp.log on 10.xx2.

On 10.xx1 I provided this input configuration in /etc/logstash/conf.d:

input {
  file {
    path => "/var/log/myapp/myapp.log"
    type => "syslog"
  }
}

output {
  elasticsearch {
    hosts => ["10.252.30.11:9200"]
    index => "versa"
  }
}

My questions are as follows:

  1. Do I need to install Logstash on 10.xx2?
  2. How can I grep only the lines containing "Error"?
  3. Every day my app produces about 10 MB of logs. I just want to know if I can add one more node to my Elasticsearch cluster so that the disk won't fill up.
  4. I don't want to keep my logs in Elasticsearch permanently. Is there any way I can set an expiry time for the logs that I am sending, i.e. delete the logs after 7 days?

I can answer 1 and 2.

  • You need to install at least one of Logstash (not recommended), Filebeat, or Packetbeat on 10.xx2. Filebeat and Packetbeat are both good, free tools from the Elastic.co company. Packetbeat captures application data over the network rather than from log files; since your case uses a log file, just use Filebeat.
  • You need to edit the Filebeat configuration file (filebeat.yml) to ship its logs to 10.xx1:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/myapp/myapp.log

And, in the output section:

logstash:
  hosts: ["10.xx1:5044"]
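Putting the two fragments together, a minimal filebeat.yml could look like this (a sketch assuming the Filebeat 1.x layout these fragments use; note that the logstash block nests under output):

filebeat:
  prospectors:
    -
      paths:
        - /var/log/myapp/myapp.log
      input_type: log   # "log" is the default prospector type

output:
  logstash:
    hosts: ["10.xx1:5044"]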

  • On 10.xx1, where you have installed Logstash (and the rest that makes up an ELK stack), you need to create some configuration files for Logstash:

    • Add an input file named 02-beats-input.conf into /etc/logstash/conf.d/:

    input {
      beats {
        port => 5044
        ssl => false
      }
    }

    • Add a filter file named 03-myapp-filter.conf into /etc/logstash/conf.d/. You should write a filter pattern that matches your log format; a sketch follows below.
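For example, if you want Logstash itself to index only the lines containing "Error" (your question 2), a minimal sketch of such a 03-myapp-filter.conf could be (the exact match is an assumption; adapt it to your real log format):

filter {
  # Drop every event whose message does not contain "Error",
  # so only error lines reach Elasticsearch.
  if [message] !~ /Error/ {
    drop { }
  }
}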

For 2:

Kibana acts as a web interface to Elasticsearch. Once it is started, by default it is available on port 5601. You can then use the Discover interface to search for terms like "Error". It will return the first 500 documents containing this term.
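For example, assuming Logstash keeps the raw log line in its default message field, typing this Lucene query into the Discover search bar limits the results to documents containing that term:

message:Error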

For 3:

Adding another Elasticsearch node will let you spread your data across nodes. But a single node can easily handle a few gigabytes without any problem.
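If you do add a second node, a minimal sketch of its elasticsearch.yml could look like the following (this assumes a pre-7.x Elasticsearch that still uses Zen discovery, consistent with the era of this setup; the cluster and node names are hypothetical placeholders):

cluster.name: my-elk-cluster                  # must match the existing node's cluster name
node.name: node-2
discovery.zen.ping.unicast.hosts: ["10.xx1"]  # point the new node at the existing one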

For 4:

You can't set an expiry date on the data. At least it would not be automatic; you would have to search every day for all the logs expiring that day and delete them.
Another solution (and a better one) is to have one index per day (with index => "versa-%{+YYYY.MM.dd}") and delete each index after 7 days (easily done with Elasticsearch Curator and a cron job).
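For instance, with Curator 4/5's action-file format (the file paths here are hypothetical), an action file such as /etc/curator/delete_versa.yml could look like:

actions:
  1:
    action: delete_indices
    description: Delete versa- indices older than 7 days
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: versa-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 7

and be run once a day from cron:

0 1 * * * curator --config /etc/curator/curator.yml /etc/curator/delete_versa.yml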
