
Scaling filebeat over docker containers

I'm looking for the appropriate way to monitor application logs produced by nginx, tomcat, and springboot running in docker, using filebeat and ELK.

In the container strategy, a container should be used for only one purpose.

One nginx per container and one tomcat per container, meaning we can't run an additional filebeat inside an nginx or tomcat container.

From what I have read on the Internet, we could use the following setup (a compose sketch follows the list):

  • a volume dedicated to storing logs
  • an nginx container that mounts the dedicated logs volume
  • a tomcat / springboot container that mounts the dedicated logs volume
  • a filebeat container that also mounts the dedicated logs volume
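For concreteness, here is a minimal docker-compose sketch of that layout. The image names, paths, and the filebeat tag are illustrative assumptions, and the applications are assumed to be configured to write their log files under /logs:

services:
  nginx:
    image: nginx
    # assumes nginx.conf is overridden to write access_log / error_log
    # under /logs instead of the image default (which symlinks to stdout)
    volumes:
      - app-logs:/logs
  api:
    image: my-springboot-api        # hypothetical application image
    # assumes the Spring Boot file appender writes to /logs/api.log
    volumes:
      - app-logs:/logs
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.13.0
    # filebeat.yml (not shown) would read /logs/*.log
    volumes:
      - app-logs:/logs:ro
volumes:
  app-logs: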

This works fine, but when it comes to scaling out the nginx and springboot containers, it gets more complex for me.

Which pattern should I use to push my logs to logstash with filebeat, given the following configuration:

  • several load-balanced nginx containers with the same configuration (the log configuration is identical: same path)
  • several springboot REST API containers behind the nginx containers, also with the same configuration (identical log configuration: same path)

Should I create one volume per set of nginx + springboot REST API containers and add a filebeat container to each set?

Should I create a global log volume shared by all my containers, with a different log filename per container (for example, embedding the container name in the filename), and run only one filebeat container?

With the second proposal, how would I scale filebeat?

Is there another way to do this?

Many thanks for your help.

The easiest thing to do, if you can manage it, is to set each container process to log to its own stdout (you might be able to specify /dev/stdout or /proc/1/fd/1 as a log file). For example, the Docker Hub nginx Dockerfile specifies

RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

so the ordinary nginx logs become the container logs. Once you do that, you can plug in the filebeat container input to read those logs and process them. You can also see them from outside the container with docker logs; they are the same logs.
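A minimal filebeat.yml sketch for that container input could look like this. The path is the Docker engine's default json-file log location, the logstash host is an assumption, and the filebeat container would need /var/lib/docker/containers mounted read-only:

filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log   # default Docker json-file log location

output.logstash:
  hosts: ["logstash:5044"]                   # assumed logstash endpoint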


What if you have to log to the filesystem? Or what if there are multiple separate log streams you want to collect?

If the number of containers is variable, but you have good control over their configuration, then I'd probably set up a single global log volume as you describe and use the filebeat log input to read every log file in that directory tree.
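As a sketch, assuming every container writes somewhere under a shared /logs mount, the log input could glob the whole tree (the logstash host is again an assumption):

filebeat.inputs:
  - type: log
    paths:
      - /logs/**/*.log    # filebeat expands ** recursively (up to 8 levels)

output.logstash:
  hosts: ["logstash:5044"]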

If the number of containers is fixed, then you can set up a volume per container and mount it at each container's "usual" log storage location. Then mount all of those directories into the filebeat container. The obvious problem here is that whenever you start or stop a container, you need to restart the log manager to pick up the added or removed volume.
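With a fixed set of containers, the filebeat service in a compose file might enumerate one named volume per application container, something like this (the volume and service names are hypothetical):

services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.13.0
    volumes:
      - nginx-1-logs:/logs/nginx-1:ro
      - nginx-2-logs:/logs/nginx-2:ro
      - api-1-logs:/logs/api-1:ro
      # adding or removing an application container means editing
      # this list and restarting filebeat
volumes:
  nginx-1-logs:
  nginx-2-logs:
  api-1-logs:

A path pattern like /logs/*/*.log in filebeat.yml would then pick up every mounted directory.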


If you're actually on Kubernetes, there are two more possibilities. If you're trying to collect container logs out of the filesystem, you need to run a copy of filebeat on every node; a DaemonSet can manage this for you. A Kubernetes pod can also run multiple containers, so your other option is to set up pods with both an application container and a filebeat "sidecar" container that ships the logs off. Set up the pod with an emptyDir volume to hold the logs, and mount it into both containers. A templating system like Helm can help you write the pod specifications without repeating the logging sidecar setup over and over.
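For the sidecar option, a minimal pod sketch could look like this. The image names and paths are hypothetical, and the filebeat sidecar is assumed to read /logs/*.log via a log input in its own config:

apiVersion: v1
kind: Pod
metadata:
  name: api-with-filebeat           # hypothetical pod name
spec:
  containers:
    - name: app
      image: my-springboot-api      # hypothetical image, writes /logs/app.log
      volumeMounts:
        - name: logs
          mountPath: /logs
    - name: filebeat                # sidecar that ships the logs off
      image: docker.elastic.co/beats/filebeat:8.13.0
      volumeMounts:
        - name: logs
          mountPath: /logs
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}                  # pod-scoped volume shared by both containers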
