Object naming issue in sinks when using "System and workload logging and monitoring" (GKE)
Currently, I am using a GKE cluster on version 1.14.10-gke.50. This cluster uses "Legacy logging and monitoring". In addition, I have two sinks: a Bucket and a BigQuery dataset.

My concern is that, according to the Google documentation, this logging implementation will be decommissioned in March 2021. I ran a test upgrading the logging implementation to "System and workload logging and monitoring", but I noticed that the folder "structure" in the Bucket gets messed up: instead of using the container names as "folders" (keep in mind that Buckets do not use real folders), all the log entries are forwarded into a single "stdout" folder. Regarding the BigQuery dataset, the tables were previously named after the container names, but with the new implementation a single stdout table is created.

I would like to keep the old structure, that is, using the container names to name the created objects, for these reasons: it is clearer because you can filter easily, and I am using some scripts to check the archived log entries and want to avoid a refactor.

According to the Google documentation, this is the normal behavior of the new "System and workload logging and monitoring". Is there any solution?
If you want to keep your bucket tidy, the best way is to create a new bucket and have the old one serve as an archive for the old logs.

Regarding the new structure, this is simply how the new workload logging and monitoring schema works. If you process the logs afterwards, you will need to group the lines by the following attributes to discriminate them:
"resource": {
  "labels": {
    "cluster_name": "cluster-1",
    "container_name": "workload-1",
    ...
  }
}
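As a sketch of that post-processing step, assuming the archived logs are exported as newline-delimited JSON (the sample payloads and the `textPayload` field below are illustrative, not taken from your actual export), grouping entries by `resource.labels.container_name` could look like this:

```python
import json
from collections import defaultdict

def group_by_container(lines):
    """Group newline-delimited JSON log entries by their container name."""
    groups = defaultdict(list)
    for line in lines:
        entry = json.loads(line)
        labels = entry.get("resource", {}).get("labels", {})
        # Entries without a container label end up under "stdout",
        # mirroring the folder the new schema writes everything into.
        container = labels.get("container_name", "stdout")
        groups[container].append(entry)
    return groups

# Example with two entries from different workloads (hypothetical payloads):
logs = [
    '{"resource": {"labels": {"cluster_name": "cluster-1", '
    '"container_name": "workload-1"}}, "textPayload": "hello"}',
    '{"resource": {"labels": {"cluster_name": "cluster-1", '
    '"container_name": "workload-2"}}, "textPayload": "world"}',
]
grouped = group_by_container(logs)
print(sorted(grouped))  # → ['workload-1', 'workload-2']
```

This would let your existing scripts recover the per-container view the old sink layout gave you for free.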
However, there is no built-in way to avoid this new logging schema.