
How to Send Kubernetes Logs to AWS CloudWatch?

AWS CloudWatch Logs in Docker

Setting the AWS CloudWatch Logs driver in Docker is done with log-driver=awslogs and log-opt, for example:

#!/bin/bash

docker run \
    --log-driver=awslogs \
    --log-opt awslogs-region=eu-central-1 \
    --log-opt awslogs-group=whatever-group \
    --log-opt awslogs-stream=whatever-stream \
    --log-opt awslogs-create-group=true \
    wernight/funbox \
        fortune

My Problem

I would like to use AWS CloudWatch Logs in a Kubernetes cluster, where each pod contains a few Docker containers. Each deployment would have a separate log group, and each container would have a separate stream. I could not find a way to send the logging parameters to the Docker containers via Kubernetes create / apply.

My Question

How can I send the log-driver and log-opt parameters to a Docker container in a pod / deployment?

What I have tried

From what I understand, Kubernetes prefers cluster-level logging over the Docker logging driver.

We could use fluentd to collect, transform, and push container logs to CloudWatch Logs.

All you need is to create a fluentd DaemonSet with a ConfigMap and a Secret. The files can be found on GitHub. It has been tested with Kubernetes v1.7.5.

The following is a brief explanation of each part.

In

With a DaemonSet, fluentd collects every container's logs from the host folder /var/lib/docker/containers.
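
A minimal sketch of what the corresponding source section might look like, assuming the standard in_tail input plugin and Docker's JSON log files; the path, pos_file, and tag values are illustrative rather than copied from the linked repository:

# Tail the JSON log files Docker writes for each container on the node
<source>
  @type tail
  path /var/lib/docker/containers/*/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  format json
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  read_from_head true
</source>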

Filter

The fluent-plugin-kubernetes_metadata_filter plugin loads the pod's metadata from the Kubernetes API server.
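
A minimal sketch of that filter, assuming the records are tagged kubernetes.* as in the source sketch above:

# Query the Kubernetes API server and attach pod metadata to each record
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>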

The log record would look like this:

{
    "log": "INFO: 2017/10/02 06:44:13.214543 Discovered remote MAC 62:a1:3d:f6:eb:65 at 62:a1:3d:f6:eb:65(kube-235)\n",
    "stream": "stderr",
    "docker": {
        "container_id": "5b15e87886a7ca5f7ebc73a15aa9091c9c0f880ee2974515749e16710367462c"
    },
    "kubernetes": {
        "container_name": "weave",
        "namespace_name": "kube-system",
        "pod_name": "weave-net-4n4kc",
        "pod_id": "ac4bdfc1-9dc0-11e7-8b62-005056b549b6",
        "labels": {
            "controller-revision-hash": "2720543195",
            "name": "weave-net",
            "pod-template-generation": "1"
        },
        "host": "kube-234",
        "master_url": "https://10.96.0.1:443/api"
    }
}

Then add some top-level fields with the Fluentd record_transformer filter plugin (a configuration sketch follows this example record):

{
    "log": "...",
    "stream": "stderr",
    "docker": {
        ...
    },
    "kubernetes": {
        ...
    },
    "pod_name": "weave-net-4n4kc",
    "container_name": "weave"
}
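
A hedged sketch of a record_transformer filter that could add the pod_name and container_name fields shown above; the actual configuration in the linked repository may differ in its details:

# Copy the pod and container names from the kubernetes metadata into top-level fields,
# so they can later be used as the CloudWatch log group and stream names
<filter kubernetes.**>
  @type record_transformer
  enable_ruby true
  <record>
    pod_name ${record["kubernetes"]["pod_name"]}
    container_name ${record["kubernetes"]["container_name"]}
  </record>
</filter>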

Out

The fluent-plugin-cloudwatch-logs plugin sends the records to AWS CloudWatch Logs.

With the log_group_name_key and log_stream_name_key settings, the log group and stream name can be taken from any field of the record:

<match kubernetes.**>
  @type cloudwatch_logs
  log_group_name_key pod_name
  log_stream_name_key container_name
  auto_create_stream true
  put_log_events_retry_limit 20
</match>

As per the Kubernetes documentation on cluster-level logging architectures, Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster.

Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: Stackdriver Logging for use with Google Cloud Platform, and Elasticsearch. You can find more information and instructions in the dedicated documents. Both use fluentd with custom configuration as an agent on the node.

There is also a Fluentd image that sends Kubernetes logs to CloudWatch, so you can use that to deploy.

You could use a Helm chart to install Fluentd:

$ helm install --name my-release incubator/fluentd-cloudwatch

This is from: https://github.com/kubernetes/charts/tree/master/incubator/fluentd-cloudwatch

Sliverfox has a great answer. You don't have to build your own image; you could also directly use the official fluentd Docker image, fluent/fluentd-kubernetes-daemonset:cloudwatch. The code is on the fluentd-kubernetes-daemonset GitHub.

You could replace the default fluent.conf with a ConfigMap, as shown below in the ds.yaml, and write your own fluent.conf in configmap.yaml (a sketch of its contents follows the snippet). For the complete YAML files, you could refer to the example ds.yaml and configmap.yaml that we wrote.

    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: varlibdockercontainers
      mountPath: /var/lib/docker/containers
      readOnly: true
    - name: config-volume
      mountPath: /fluentd/etc/
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
  - name: config-volume
    configMap:
      name: fluentd-cw-config
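
For illustration only, a sketch of the CloudWatch output such a custom fluent.conf might contain; the region and group name reuse the values from the examples above rather than any defaults of the image, and source and filter sections like the ones sketched earlier would still be needed:

# Send all records to CloudWatch Logs, using the Fluentd tag as the stream name
<match **>
  @type cloudwatch_logs
  region eu-central-1
  log_group_name whatever-group
  use_tag_as_stream true
  auto_create_stream true
</match>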
