
EKS - Fluent Bit to CloudWatch: unable to remove Kubernetes data from log entries

We have configured Fluent Bit to send the logs from our cluster directly to CloudWatch. We have enabled the Kubernetes filter in order to set our log_stream_name as $(kubernetes['container_name']).

However, the logs are terrible.

Each CloudWatch line looks like this:

    2022-06-23T14:17:34.879+02:00   {"kubernetes":{"redacted_redacted":"145236632541.lfl.ecr.region-#.amazonaws.com/redacted@sha256:59392fab7hsfghsfghsfghsfghsfghsfghc39c1bee75c0b4bfc2d9f4a405aef449b25","redacted_image":"145236632541.lfl.ecr.region-#.amazonaws.com/redacted:ve3b56a45","redacted_name":"redacted-redacted","docker_id":"b431f9788f46sd5f4ds65f4sd56f4sd65f4d336fff4ca8030a216ecb9e0a","host":"ip-0.0.0.0.region-#.compute.internal","namespace_name":"namespace","pod_id":"podpodpod-296c-podpod-8954-podpodpod","pod_name":"redacted-redacted-redacted-7dcbfd4969-mb5f5"},
    2022-06-23T14:17:34.879+02:00   {"kubernetes":{"redacted_redacted":"145236632541.lfl.ecr.region-#.amazonaws.com/redacted@sha256:59392fab7hsfghsfghsfghsfghsfghsfghc39c1bee75c0b4bfc2d9f4a405aef449b25","redacted_image":"145236632541.lfl.ecr.region-#.amazonaws.com/redacted:ve3b56a45","redacted_name":"redacted-redacted","docker_id":"b431f9788f46sd5f4ds65f4sd56f4sd65f4d336fff4ca8030a216ecb9e0a","host":"ip-0.0.0.0.region-#.compute.internal","namespace_name":"namespace","pod_id":"podpodpod-296c-podpod-8954-podpodpod","pod_name":"redacted-redacted-redacted-7dcbfd4969-mb5f5"},
    2022-06-23T14:17:34.879+02:00   {"kubernetes":{"redacted_redacted":"145236632541.lfl.ecr.region-#.amazonaws.com/redacted@sha256:59392fab7hsfghsfghsfghsfghsfghsfghc39c1bee75c0b4bfc2d9f4a405aef449b25","redacted_image":"145236632541.lfl.ecr.region-#.amazonaws.com/redacted:ve3b56a45","redacted_name":"redacted-redacted","docker_id":"b431f9788f46sd5f4ds65f4sd56f4sd65f4d336fff4ca8030a216ecb9e0a","host":"ip-0.0.0.0.region-#.compute.internal","namespace_name":"namespace","pod_id":"podpodpod-296c-podpod-8954-podpodpod","pod_name":"redacted-redacted-redacted-7dcbfd4969-mb5f5"},
    2022-06-23T14:20:07.074+02:00   {"kubernetes":{"redacted_redacted":"145236632541.lfl.ecr.region-#.amazonaws.com/redacted@sha256:59392fab7hsfghsfghsfghsfghsfghsfghc39c1bee75c0b4bfc2d9f4a405aef449b25","redacted_image":"145236632541.lfl.ecr.region-#.amazonaws.com/redacted:ve3b56a45","redacted_name":"redacted-redacted","docker_id":"b431f9788f46sd5f4ds65f4sd56f4sd65f4d336fff4ca8030a216ecb9e0a","host":"ip-0.0.0.0.region-#.compute.internal","namespace_name":"namespace","pod_id":"podpodpod-296c-podpod-8954-podpodpod","pod_name":"redacted-redacted-redacted-7dcbfd4969-mb5f5"},

Which makes the logs unusable unless expanded, and once expanded the logs look like this:

2022-06-23T14:21:34.207+02:00
{
    "kubernetes": {
        "container_hash": "145236632541.lfl.ecr.region.amazonaws.com/redacted@sha256:59392fab7hsfghsfghsfghsfghsfghsfghc39c1bee75c0b4bfc2d9f4a405aef449b25",
        "container_image": "145236632541.lfl.ecr.region-#.amazonaws.com/redacted:ve3b56a45",
        "container_name": "redacted-redacted",
        "docker_id": "b431f9788f46sd5f4ds65f4sd56f4sd65f4d336fff4ca8030a216ecb9e0a",
        "host": "ip-0.0.0.0.region-#.compute.internal",
        "namespace_name": "redacted",
        "pod_id": "podpodpod-296c-podpod-8954-podpodpod",
        "pod_name": "redacted-redacted-redacted-7dcbfd4969-mb5f5"
    },
    "log": "[23/06/2022 12:21:34] loglineloglinelogline\ loglineloglinelogline \n",
    "stream": "stdout"
}
    {"kubernetes":{"redacted_redacted":"145236632541.lfl.ecr.region-#.amazonaws.com/redacted@sha256:59392fab7hsfghsfghsfghsfghsfghsfghc39c1bee75c0b4bfc2d9f4a405aef449b25","redacted_image

Which is also a bit horrible, because every line is flooded with Kubernetes data. I would like to remove the Kubernetes data from the logs completely, but I would like to keep using $(kubernetes['container_name']) as the log stream name so that the logs are properly named. I have tried using filters with Remove_key and Lua scripts to remove the Kubernetes data, but as soon as something removes it, the log stream can no longer be named $(kubernetes['container_name']).
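
For example, a record_modifier filter along these lines (a minimal sketch of the kind of filter tried; the Match pattern is a placeholder) does strip the map, but since filters run before the output plugin resolves the stream name, $(kubernetes['container_name']) is then left with nothing to read:

    [FILTER]
        Name        record_modifier
        Match       *
        Remove_key  kubernetes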

I have found very little documentation on this, and have not found a proper way to remove the Kubernetes data while keeping my log_stream_name as my container_name.

Here is the raw Fluent Bit config that I used: https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit-compatible.yaml

Any help would be appreciated.

There is an instruction in the AWS documentation, in the section "(Optional) Reducing the log volume from Fluent Bit": https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-logs-FluentBit.html

Just add nest filters to the log config: lift the nested kubernetes map, remove the unwanted keys, then nest what remains back. For example:

user-api.conf: |
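# Tail the user-api container log files and tag records with user-api.*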
[INPUT]
    Name                tail
    Tag                 user-api.*
    Path                /var/log/containers/user-api*.log
    Docker_Mode         On
    Docker_Mode_Flush   5
    Docker_Mode_Parser  container_firstline_user
    Parser              docker
    DB                  /var/fluent-bit/state/flb_user_api.db
    Mem_Buf_Limit       50MB
    Skip_Long_Lines     On
    Refresh_Interval    10
    Rotate_Wait         30
    storage.type        filesystem
    Read_from_Head      ${READ_FROM_HEAD}

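# Enrich records with Kubernetes metadata (namespace, pod, container, ...)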
[FILTER]
    Name                kubernetes
    Match               user-api.*
    Kube_URL            https://kubernetes.default.svc:443
    Kube_Tag_Prefix     user-api.var.log.containers.
    Merge_Log           On
    Merge_Log_Key       log_processed
    K8S-Logging.Parser  On
    K8S-Logging.Exclude Off
    Labels              Off
    Annotations         Off

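# Drop successful "GET /ping" health-check lines to reduce log volume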
[FILTER]
    Name                grep
    Match               user-api.*
    Exclude             log /.*"GET \/ping HTTP\/1.1" 200.*/
    
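# Lift the keys nested under "kubernetes" to the top level, prefixed with "Kube."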
[FILTER]
    Name                nest
    Match               user-api.*
    Operation           lift
    Nested_under        kubernetes
    Add_prefix          Kube.

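# Drop the metadata keys that should not be sent to CloudWatch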
[FILTER]
    Name                modify
    Match               user-api.*
    Remove              Kube.container_hash
    Remove              Kube.container_image
    Remove              Kube.container_name
    Remove              Kube.docker_id
    Remove              Kube.host
    Remove              Kube.pod_id

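# Nest the remaining Kube.* keys (namespace_name, pod_name) back under "kubernetes"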
[FILTER]
    Name                nest
    Match               user-api.*
    Operation           nest
    Wildcard            Kube.*
    Nested_under        kubernetes
    Remove_prefix       Kube.

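# Send to CloudWatch Logs; stream names are the "app-" prefix plus the record tag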
[OUTPUT]
    Name                cloudwatch_logs
    Match               user-api.*
    region              ${AWS_REGION}
    log_group_name      /aws/containerinsights/${CLUSTER_NAME}/user-api
    log_stream_prefix   app-
    auto_create_group   true
    extra_user_agent    container-insights
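
If the requirement is specifically to keep $(kubernetes['container_name']) as the stream name, a possible alternative (a sketch, assuming the Go cloudwatch output plugin used by the linked fluent-bit-compatible.yaml, which supports templated stream names and a log_key option) is to leave the kubernetes map in the record and only send the log field to CloudWatch: the stream name is resolved from the full record, while log_key narrows what actually gets written.

[OUTPUT]
    Name              cloudwatch
    Match             user-api.*
    region            ${AWS_REGION}
    log_group_name    /aws/containerinsights/${CLUSTER_NAME}/user-api
    log_stream_name   $(kubernetes['container_name'])
    log_key           log
    auto_create_group true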
