Ship filebeat logs to logstash to index with docker metadata

I am trying to index in Elasticsearch with the help of filebeat and logstash. Here is the filebeat.yml:

filebeat.inputs:
- type: docker
  combine_partial: true
  containers:
    path: "/usr/share/dockerlogs/data"
    stream: "stdout"
    ids:
      - "*"
  exclude_files: ['\.gz$']
  ignore_older: 10m

processors:
  # decode the log field (sub JSON document) if JSON encoded, then map its fields to Elasticsearch fields
- decode_json_fields:
    fields: ["log", "message"]
    target: ""
    # overwrite existing target elasticsearch fields while decoding json fields
    overwrite_keys: true
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

# setup filebeat to send output to logstash
output.logstash:
  hosts: ["xxx.xx.xx.xx:5044"]

# Write Filebeat own logs only to file to avoid catching them with itself in docker log files
logging.level: info
logging.to_files: false
logging.to_syslog: false
logging.metrics.enabled: false
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
ssl.verification_mode: none

And here is the logstash.conf:

input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}

output {
  stdout {
    codec => dots
  }
  elasticsearch {
    hosts => "http://xxx.xx.xx.x:9200"
    index => "%{[docker][container][labels][com][docker][swarm][service][name]}-%{+xxxx.ww}"
  }
}

I am trying to index with the docker name so it is more readable and clearer than the usual pattern we see all the time, like "filebeat-xxxxxx.some-date". I tried several things:

- index => "%{[docker][container][labels][com][docker][swarm][service][name]}-%{+xxxx.ww}"
- index => "%{[docker][container][labels][com][docker][swarm][service][name]}-%{+YYYY.MM}"
- index => "%{[docker][swarm][service][name]}-%{+xxxx.ww}"

But nothing worked. What am I doing wrong? Maybe I am doing something wrong or missing something in the filebeat.yml file; it could be that too. Thanks for any help or any lead.

It looks like you're unsure of what docker metadata fields are being added. It might be a good idea to just get successful indexing first with the default index name (e.g. "filebeat-xxxxxx.some-date" or whatever) and then view the log events to see the format of your docker metadata fields.
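A minimal way to do that inspection (just a sketch, not your exact pipeline) is to keep a default-style index name and temporarily print full events to the console with the rubydebug codec, so the exact nesting of the docker metadata fields can be read off the output:

output {
  # Temporary: dump each event in full so the docker metadata field names
  # and their nesting are visible on the console.
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "http://xxx.xx.xx.x:9200"
    # Default-style index name while verifying the pipeline end to end.
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}

Keep in mind that if a field referenced in the index setting does not exist on the event, Logstash leaves the literal "%{...}" placeholder in the index name, which is the usual symptom when a pattern like yours "doesn't work".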

I don't have the same setup as you, but for reference, I'm on AWS ECS, so the format of my docker fields is:

"docker": {
  "container": {
    "name": "",
    "labels": {
      "com": {
        "amazonaws": {
          "ecs": {
            "cluster": "",
            "container-name": "",
            "task-definition-family": "",
            "task-arn": "",
            "task-definition-version": ""
          }
        }
      }
    },
    "image": "",
    "id": ""
  }
}

After seeing the format and fields available, I was able to add a custom "application_name" field using the above. This field is generated in my input plugin, which is redis in my case, but all input plugins should have the add_field option ( https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html#plugins-inputs-beats-add_field ):

input {
  redis {
    host => "***"
    data_type => "list"
    key       => "***"
    codec     => json
    add_field => {
      "application_name" => "%{[docker][container][labels][com][amazonaws][ecs][task-definition-family]}"
    }
  }
}
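For the beats input in your pipeline the same option should be available (the link above points at the beats input docs); here is a sketch, assuming the swarm service label path from your question is the one that actually appears in the inspected events:

input {
  beats {
    port => 5044
    host => "0.0.0.0"
    # Assumption: this label path matches what the inspected events actually
    # contain; adjust it after checking the rubydebug output.
    add_field => {
      "application_name" => "%{[docker][container][labels][com][docker][swarm][service][name]}"
    }
  }
}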

After getting this new custom field, I was able to run specific filters (grok, json, kv, etc.) for different "application_name" values, as they had different log formats, but the important part for you is that you could use it in your output to Elasticsearch for index names:

output {
  elasticsearch {
      user => ***
      password => ***
      hosts => [ "***" ]
      index => "logstash-%{application_name}-%{+YYY.MM.dd}"
  }
}
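If you would rather not rely on the field being set at input time, a filter-stage alternative (a sketch; "application_name" and the label path are the same assumptions as above) can copy the label and fall back to a fixed value, so the index name never ends up with an unresolved placeholder:

filter {
  if [docker][container][labels][com][docker][swarm][service][name] {
    mutate {
      # Copy the swarm service label into a flat field for the index name.
      add_field => { "application_name" => "%{[docker][container][labels][com][docker][swarm][service][name]}" }
    }
  } else {
    mutate {
      # Fallback so events without the label still land in a valid index.
      add_field => { "application_name" => "unknown-service" }
    }
  }
}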
