How do I send logs to a GELF UDP endpoint from Kubernetes on a per-pod basis?
I've recently started using Kubernetes and am now looking at how to configure centralised logging. For the majority of the pods, the application itself logs straight to the GELF endpoint (Logstash); however, there are a number of "management" pods whose logs I also need to collect.
Previously, when I was using Docker Swarm, I would simply add the log driver (and relevant configuration) to the compose file. However, that option doesn't seem to exist in Kubernetes.
I looked at using Fluentd to read the logs straight from /var/log/containers, but I ran into a couple of issues here:
There doesn't seem to be any easy way to specify which pods should have their logs sent to Logstash. I understand you can create filters and so on, but that doesn't seem very maintainable going forward; something driven by annotations on the individual pods would be more sensible.
The logs in /var/log/containers are in the json-file log format, not GELF.
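For reference, each line in those json-file logs is a JSON object produced by Docker's `json-file` driver, wrapping the raw log line with a stream name and timestamp, roughly like this illustrative example:

```json
{"log":"Starting server on :8080\n","stream":"stdout","time":"2018-01-01T12:00:00.000000000Z"}
```

So whatever ships these logs has to parse this wrapper and re-encode the message as GELF before sending it on.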
Is there any way in Kubernetes to use the built-in Docker logging drivers on a per-pod basis to easily log to the GELF endpoint?
Try using Fluentd with the Kubernetes metadata plugin to extract the local json-file Docker logs and send them to Graylog2.
`tag_to_kubernetes_name_regexp` - the regular expression used to extract Kubernetes metadata (pod name, container name, namespace) from the current Fluentd tag.
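A minimal Fluentd configuration along these lines might look like the sketch below. It assumes the fluent-plugin-kubernetes_metadata_filter and fluent-plugin-gelf plugins are installed; the Logstash host, port, and file paths are placeholders you would need to adjust:

```
# Tail the Docker json-file logs that Kubernetes symlinks under /var/log/containers
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  format json
</source>

# Enrich each record with pod name, container name, namespace, labels, etc.
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

# Forward the enriched records to the GELF UDP endpoint (placeholder host/port)
<match kubernetes.**>
  @type gelf
  host logstash.example.com
  port 12201
  protocol udp
  flush_interval 5s
</match>
```

With the metadata attached to every record, you could then narrow the `<match>` (or add a `grep`-style filter) on pod annotations or labels to select only the pods you care about, rather than maintaining per-pod filters by hand.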