
Kafka vs. Filebeat for shipping logs to Logstash

I am currently setting up a central logging system (using ELK) that is expected to receive log data from hundreds of microservices and could grow further. The requirements are minimum latency and high availability. Right now I am stuck on what the design should look like. While researching online, I found the approach below widely used for such requirements:

Microservice -> filebeat -> kafka -> logstash -> ElasticSearch -> Kibana

However, I am struggling to understand whether Filebeat is really useful in this case. What if I stream logs directly to Kafka, which then ships them to Logstash? That would spare me the maintenance of log files, and there would be one less component to monitor and maintain. One advantage I do see in Kafka over Filebeat is that it can act as a buffer when the volume of shipped data is very high or when the ES cluster is unreachable. Source: https://www.elastic.co/blog/just-enough-kafka-for-the-elastic-stack-part1
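For reference, direct shipping could look roughly like this: each service attaches a logging handler that publishes records straight to a Kafka topic. This is only a sketch; the kafka-python client, the broker address, the topic name, and the KafkaLogHandler class are all my assumptions for illustration, not part of any of these tools.

    # Minimal sketch using the kafka-python client (pip install kafka-python).
    import json
    import logging

    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers=["localhost:9092"],      # assumed broker address
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    class KafkaLogHandler(logging.Handler):
        """Forwards each log record to a Kafka topic instead of a file."""
        def emit(self, record):
            producer.send("service-logs", {        # assumed topic name
                "service": "my-service",
                "level": record.levelname,
                "message": self.format(record),
            })

    logger = logging.getLogger("my-service")
    logger.addHandler(KafkaLogHandler())
    logger.warning("disk usage above 80%")

One consequence of this design is that log delivery runs inside the service process: records still buffered in the producer are lost if the process crashes, whereas a log file gives a shipper like Filebeat a durable on-disk buffer to resume from.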

I want to understand whether there is any real benefit of Filebeat that I am failing to see.

Filebeat can be installed on each of your servers or nodes. It collects and ships logs quickly; it is fast and lightweight, written in Go.

In your case, the advantage is that you don't have to spend time developing the same log collection and shipping functionality yourself: you simply install and configure Filebeat to fit your logging architecture. This is very convenient.
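As an illustration, a minimal Filebeat configuration that tails service log files and publishes them to Kafka might look like the sketch below; the paths, broker hosts, and topic name are assumptions you would adapt to your setup.

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/my-service/*.log          # assumed log location

    output.kafka:
      hosts: ["kafka1:9092", "kafka2:9092"]    # assumed broker addresses
      topic: "service-logs"                    # assumed topic name
      required_acks: 1                         # wait for the leader's ack

Filebeat also keeps its position in each file in a registry, so after a restart or a downstream outage it resumes where it left off rather than losing data.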

Another description of Filebeat is available at the link.
