
Why should I use Filebeat as a DaemonSet on EKS if logs are stored on the host?

Almost all of the references I found on the web say Filebeat should be run as a DaemonSet or sidecar in Kubernetes.

What I observed in my cluster is that EKS pod logs are already being saved under a host directory (/var/log/container), so why shouldn't I run Filebeat as a normal process on the host and collect logs from that host path? Also, if the node group scales, userdata can be used to install and configure Filebeat on new nodes. What problems will I face with this?
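For reference, a minimal sketch (assuming the standard kubelet layout, where /var/log/containers holds one symlink per running container; adjust the path if your nodes differ) that can be run directly on a worker node to list the log files in question:

```python
# Minimal sketch (assumption: standard kubelet layout on the worker node,
# where /var/log/containers holds one symlink per container log). Run it
# directly on an EKS node to see the files Filebeat would pick up.
from pathlib import Path

LOG_DIR = Path("/var/log/containers")  # typical default; adjust to your node's layout

for link in sorted(LOG_DIR.glob("*.log")):
    # Each entry is named <pod>_<namespace>_<container>-<id>.log and is a
    # symlink into /var/log/pods/, maintained by the kubelet.
    print(f"{link.name} -> {link.resolve()}")
```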

I am hesitant to go this route without a DaemonSet, as I have not found any similar solution and I am unaware of the limitations I might run into.

Deploying Filebeat as a DaemonSet gives you the advantage of deploying it with Kubernetes manifests, as opposed to a custom deployment via userdata or configuration management (see the sketch after the list below). That is basically the whole point of adopting Kubernetes in the first place.

Running another process directly on the host has the following disadvantages:

  1. Deployment is custom and doesn't follow the same standards as everything else you deploy to the cluster.
  2. The process itself is not containerized, making it more brittle and prone to error.
  3. Some Kubernetes capabilities will not be available to it, such as exposing metrics to Prometheus, joining a service mesh, etc.
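To illustrate the manifest-driven approach, here is a minimal sketch of a Filebeat DaemonSet created with the official `kubernetes` Python client, which is equivalent to `kubectl apply`-ing a YAML manifest. The image tag, namespace, ConfigMap and ServiceAccount names are placeholders rather than anything from your cluster; a real deployment would normally start from Elastic's reference DaemonSet manifest or Helm chart.

```python
# Sketch only: deploy Filebeat as a DaemonSet via the official Kubernetes
# Python client. Assumptions (not from the question): the "kube-system"
# namespace, a pre-existing "filebeat" ServiceAccount and "filebeat-config"
# ConfigMap, and the image tag shown below.
from kubernetes import client, config


def filebeat_daemonset() -> client.V1DaemonSet:
    labels = {"app": "filebeat"}
    container = client.V1Container(
        name="filebeat",
        image="docker.elastic.co/beats/filebeat:8.13.0",  # use the version you actually run
        args=["-c", "/etc/filebeat.yml", "-e"],
        volume_mounts=[
            # Mount /var/log so both the /var/log/containers symlinks and
            # their /var/log/pods targets are visible to Filebeat.
            client.V1VolumeMount(name="varlog", mount_path="/var/log", read_only=True),
            client.V1VolumeMount(name="config", mount_path="/etc/filebeat.yml",
                                 sub_path="filebeat.yml", read_only=True),
        ],
    )
    pod_spec = client.V1PodSpec(
        service_account_name="filebeat",  # assumed RBAC for Kubernetes metadata enrichment
        containers=[container],
        tolerations=[client.V1Toleration(operator="Exists")],  # schedule on tainted nodes too
        volumes=[
            client.V1Volume(name="varlog",
                            host_path=client.V1HostPathVolumeSource(path="/var/log")),
            client.V1Volume(name="config",
                            config_map=client.V1ConfigMapVolumeSource(name="filebeat-config")),
        ],
    )
    return client.V1DaemonSet(
        metadata=client.V1ObjectMeta(name="filebeat", labels=labels),
        spec=client.V1DaemonSetSpec(
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=pod_spec,
            ),
        ),
    )


if __name__ == "__main__":
    config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
    client.AppsV1Api().create_namespaced_daemon_set(namespace="kube-system",
                                                    body=filebeat_daemonset())
```

In practice you would keep this as a YAML manifest (or Helm values) in version control and apply it through your normal deployment pipeline, which is exactly the advantage over baking Filebeat into node userdata.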
