

Object name issue in sinks using "System and workload logging and monitoring" (GKE)

Currently, I am using a GKE cluster on version 1.14.10-gke.50. This cluster is using "Legacy logging and monitoring". In addition, I have two sinks: a Cloud Storage bucket and a BigQuery dataset.

My concern is that, according to the Google documentation, this logging implementation will be decommissioned in March 2021. I ran a test upgrading the logging implementation to "System and workload logging and monitoring", but I noticed that the folder "structure" in the bucket gets mixed up: instead of using the container names as "folders" (keeping in mind that buckets do not use real folders), all the log entries are forwarded into a single "stdout" folder. Regarding the BigQuery dataset, the tables were previously named after the container names, but with the new implementation a single stdout table is created.

I would like to keep the old structure, meaning the container names are used to name the objects being created. My reasons: it is clearer because you can filter easily, and I am using some scripts to check the archived log entries and I want to avoid a refactor.

According to the Google documentation, this is the normal behavior of the new "System and workload logging and monitoring". Is there any solution?

If you want to keep your bucket tidy, the best approach is to create a new bucket and keep the old one as an archive of the old logs.

Regarding the new structure, this is simply how the new workload logging and monitoring schema works. If you process the logs afterwards, you will need to group the entries by the following attributes in order to discriminate them (see the sketch after the snippet):

"resource":{
  "labels":{
    "cluster_name":"cluster-1",
    "container_name":"workload-1",
...
}
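For example, if your scripts read the JSON entries that the Cloud Storage sink exports, they could group them by resource.labels.container_name instead of relying on the object path. Below is a minimal Python sketch under the assumption that the export is newline-delimited JSON; the file name exported_logs.json is only a placeholder.

import json
from collections import defaultdict

# Group exported log entries by the container that produced them,
# since the object path no longer reflects the container name.
entries_by_container = defaultdict(list)

with open("exported_logs.json") as f:  # placeholder path for a sink export file
    for line in f:
        entry = json.loads(line)
        labels = entry.get("resource", {}).get("labels", {})
        container = labels.get("container_name", "unknown")
        entries_by_container[container].append(entry)

for container, entries in entries_by_container.items():
    print(container, len(entries))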

However, there is no built-in solution to avoid this new logging schema.
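On the BigQuery side, the same resource labels are available as columns of the exported table, so the per-container filtering that the old per-container tables provided can be reproduced with a query. A hedged sketch using the google-cloud-bigquery client follows; the project, dataset, and table identifiers are placeholders, and the exact table name depends on how your sink partitions the export.

from google.cloud import bigquery

client = bigquery.Client()

# Placeholder identifiers: adjust project, dataset and table to match your sink's export.
query = """
    SELECT timestamp, resource.labels.container_name, textPayload
    FROM `my-project.my_dataset.stdout`
    WHERE resource.labels.container_name = @container
    ORDER BY timestamp DESC
    LIMIT 100
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("container", "STRING", "workload-1"),
    ]
)

# Iterating the query job waits for completion and yields result rows.
for row in client.query(query, job_config=job_config):
    print(row.timestamp, row.container_name, row.textPayload)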
