
Use logs severity with Google Compute Engine and the Cloud Logging agent

I would like to use logs severity with the Google Cloud Logging agent and a Linux (Debian) VM running on Compute Engine.

The Compute Engine instance is a debian-9 n2-standard-4 machine.

I installed the Cloud Logging agent following the documentation:

$ curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
$ sudo bash add-logging-agent-repo.sh
$ sudo apt-get install google-fluentd
$ sudo apt-get install -y google-fluentd-catch-all-config-structured
$ sudo service google-fluentd start
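
To confirm the agent is running before testing, its service status can be checked (a standard Debian service check, not specific to the agent):

$ sudo service google-fluentd status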

According to this paragraph of the documentation, if the log line is a serialized JSON object and the option detect_json is set to true, we can use logs severity.

So I logged something like the line below, but unfortunately I don't get any severity in GCP.

$ logger '{"severity":"ERROR","message":"This is an error"}'

(screenshot: the log entry appears in Cloud Logging without a severity level)

But I would expect something like this:

(screenshot: the log entry with severity ERROR)

I don't mind whether the type of the log entry is textPayload or jsonPayload.

The file /etc/google-fluentd/google-fluentd.conf, with detect_json enabled:

$ cat /etc/google-fluentd/google-fluentd.conf 
# Master configuration file for google-fluentd

# Include any configuration files in the config.d directory.
#
# An example "catch-all" configuration can be found at
# https://github.com/GoogleCloudPlatform/fluentd-catch-all-config
@include config.d/*.conf

# Prometheus monitoring.
<source>
  @type prometheus
  port 24231
</source>
<source>
  @type prometheus_monitor
</source>

# Do not collect fluentd's own logs to avoid infinite loops.
<match fluent.**>
  @type null
</match>

# Add a unique insertId to each log entry that doesn't already have it.
# This helps guarantee the order and prevent log duplication.
<filter **>
  @type add_insert_ids
</filter>

# Configure all sources to output to Google Cloud Logging
<match **>
  @type google_cloud
  buffer_type file
  buffer_path /var/log/google-fluentd/buffers
  # Set the chunk limit conservatively to avoid exceeding the recommended
  # chunk size of 5MB per write request.
  buffer_chunk_limit 512KB
  # Flush logs every 5 seconds, even if the buffer is not full.
  flush_interval 5s
  # Enforce some limit on the number of retries.
  disable_retry_limit false
  # After 3 retries, a given chunk will be discarded.
  retry_limit 3
  # Wait 10 seconds before the first retry. The wait interval will be doubled on
  # each following retry (20s, 40s...) until it hits the retry limit.
  retry_wait 10
  # Never wait longer than 5 minutes between retries. If the wait interval
  # reaches this limit, the exponentiation stops.
  # Given the default config, this limit should never be reached, but if
  # retry_limit and retry_wait are customized, this limit might take effect.
  max_retry_wait 300
  # Use multiple threads for processing.
  num_threads 8
  # Use the gRPC transport.
  use_grpc true
  # If a request is a mix of valid log entries and invalid ones, ingest the
  # valid ones and drop the invalid ones instead of dropping everything.
  partial_success true
  # Enable monitoring via Prometheus integration.
  enable_monitoring true
  monitoring_type opencensus
  detect_json true
</match>
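
For reference, the agent only reads its configuration at startup, so any change to these files takes effect after a restart:

$ sudo service google-fluentd restart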

The file /etc/google-fluentd/config.d/syslog.conf:

$ cat /etc/google-fluentd/config.d/syslog.conf
<source>
  @type tail

  # Parse the timestamp, but still collect the entire line as 'message'
  format syslog

  path /var/log/syslog
  pos_file /var/lib/google-fluentd/pos/syslog.pos
  read_from_head true
  tag syslog
</source>

What am I missing?

Note: I am aware of the gcloud workaround, but it is not ideal because it logs everything under the resource type "Global" rather than under my VM.
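
For context, a minimal sketch of that workaround (the log name my-test-log is illustrative); gcloud logging write accepts a severity, but the resulting entries land under the global resource type:

$ gcloud logging write my-test-log '{"message":"This is an error"}' --payload-type=json --severity=ERROR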

logger writes to syslog, and syslog "Parse[s] the timestamp, but still collect[s] the entire line as 'message'", as stated in /etc/google-fluentd/config.d/syslog.conf.

In your case, you can use logs severity by streaming structured logs via a structured log file in JSON format.
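
If the installed catch-all config does not already tail the target file, a minimal source sketch modeled on the syslog.conf above can be added (the path, pos_file, and tag are illustrative), e.g. as /etc/google-fluentd/config.d/test-structured-log.conf, followed by an agent restart:

<source>
  @type tail

  # Parse each line as a JSON object rather than collecting it as plain text.
  format json

  path /tmp/test-structured-log.log
  pos_file /var/lib/google-fluentd/pos/test-structured-log.pos
  read_from_head true
  tag test-structured-log
</source>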

Here is the result:

$ echo '{"severity":"ERROR","message":"This is an error"}' >> /tmp/test-structured-log.log

(screenshot: the structured log entry appears in Cloud Logging with severity ERROR)
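
The same approach works for the other LogEntry severities (DEBUG, INFO, NOTICE, WARNING, ERROR, CRITICAL, ALERT, EMERGENCY), for example:

$ echo '{"severity":"WARNING","message":"This is a warning"}' >> /tmp/test-structured-log.log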
