
Kubernetes logging architecture - NFS Persistent volume?

In a Kubernetes cluster running a microservice application, we need logs to diagnose issues.

  1. Is it a good idea to use an NFS persistent volume for all microservice logs?
  2. If yes, is it possible to apply a log rotation policy on the NFS persistent volume based on size or age in days?
  3. If we use the ELK stack with Filebeat, it will need more resources, and the customer will have to learn the stack to find the logs they need.

What would be the best approach: NFS, the ELK stack, or a mix of both?

  1. NFS is fine as long as it can offer the required performance (see the volume sketch after this list).
  2. You should apply a lifecycle policy at the Elasticsearch index level (an example policy is shown below). Modern Kibana has a nice interface for creating lifecycle policies and for overall monitoring of ES.
  3. I have never worked with Filebeat. We use the EFK stack - Elasticsearch, Fluentd and Kibana. It works pretty well and is installed using Helm Charts only (see the sketch below).
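
For point 1, this is a minimal sketch of what an NFS-backed volume shared by log-writing pods could look like. The server address, export path, and capacity are placeholder assumptions, not values from the question:

```yaml
# Statically provisioned NFS PersistentVolume plus a claim that pods can mount.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: logs-nfs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany            # multiple pods can mount the same log share
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.internal   # placeholder NFS server
    path: /exports/app-logs        # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logs-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # empty string: bind to the static PV above, not a dynamic provisioner
  resources:
    requests:
      storage: 50Gi
```

Pods would then mount `logs-nfs-pvc` at their log directory; because the access mode is ReadWriteMany, multiple replicas can write to the same export, as long as they use distinct file names.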
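
For point 2, Elasticsearch index lifecycle management can express the "size or days" requirement directly. A minimal sketch of such a policy, created through the ILM API (the policy name, sizes, and ages are illustrative assumptions, not recommendations):

```
PUT _ilm/policy/app-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "10gb",
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

Note that the rollover action only takes effect when indices are written through a rollover alias or a data stream, which the log shipper's Elasticsearch output needs to be configured for.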
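
For point 3, an EFK install via Helm typically comes down to a few chart installs. A rough sketch, assuming the publicly available Elastic and Fluent community chart repositories; release names are arbitrary and values files with resource sizing, persistence, and output configuration are deliberately omitted:

```sh
helm repo add elastic https://helm.elastic.co
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update

# Elasticsearch and Kibana from the Elastic charts
helm install elasticsearch elastic/elasticsearch
helm install kibana elastic/kibana

# Fluentd log collector; configure it to tail container logs and forward to Elasticsearch
helm install fluentd fluent/fluentd
```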

