Collect Kubernetes pods' logs
I'm trying to collect my application containers' logs throughout their entire life cycle. These containers run inside Kubernetes pods. I've found solutions like Fluentd, but they seem to require specifying a backend (Elasticsearch, AWS S3, etc.), whereas I want to collect logs into files with specific names, for example podname_namespace_containername.json, and then parse those files with a script. Is this possible with Fluentd?
By far the fastest way to set up log collection is https://github.com/helm/charts/tree/master/stable/fluent-bit. Refer to the chart's values.yaml for all the available options. It supports multiple backends such as Elasticsearch, S3, and Kafka. Every log event is enriched with pod metadata (pod name, namespace, etc.) and tagged, so that you can organize processing separately on the backend. For example, on the backend you can select and parse only certain pods in certain namespaces.
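As a sketch of how that chart is typically installed (this uses the Helm 2-era `stable` repository linked above; the exact `backend.*` value names come from that chart's values.yaml and may differ between chart versions, so treat them as an assumption and check the file before use):

```shell
# Install the Fluent Bit DaemonSet from the stable charts repo,
# pointing it at an Elasticsearch backend (values are chart-specific):
helm install stable/fluent-bit \
  --name fluent-bit \
  --namespace logging \
  --set backend.type=es \
  --set backend.es.host=elasticsearch
```

Any option in values.yaml can be overridden the same way with `--set`, or collected into a custom values file passed via `-f`.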
According to https://kubernetes.io/docs/concepts/cluster-administration/logging/, your application logs to stdout/stderr, the output gets written to files on the underlying node, and a log collector (running as a DaemonSet) collects everything and forwards it onward. The Fluent Bit DaemonSet in Kubernetes implements exactly this architecture. More docs on Fluent Bit: https://docs.fluentbit.io/manual/concepts/data-pipeline
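Note that on each node the kubelet/container runtime already exposes container logs under `/var/log/containers/` with names of the form `<pod>_<namespace>_<container>-<id>.log`, which is essentially the naming scheme you asked for. A minimal Fluent Bit pipeline tailing those files might look like the sketch below (plugin names are real Fluent Bit plugins; the specific paths and options are assumptions to adapt to your version and deployment):

```
# Sketch: tail node-level container logs, enrich with pod metadata,
# and write them back out to local files (one file per tag).
[INPUT]
    Name    tail
    Path    /var/log/containers/*.log
    Tag     kube.*

[FILTER]
    Name       kubernetes
    Match      kube.*
    Merge_Log  On

[OUTPUT]
    Name    file
    Match   kube.*
    Path    /fluent-bit/output
```

Since the tail input derives the tag from the source file path, the output files keep the pod/namespace/container naming, which your parsing script can then consume.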