
Configure Filebeat hints-based Autodiscover with Elastic Common Schema

I can't find any documentation on how to configure Filebeat to handle ECS-formatted JSON logs.

I'm using ecs-pino-format to output ECS logs; here is a typical log line I output:

{"log":{"level":"debug","logger":"pino"},"@timestamp":"2020-06-10T17:02:11.266Z","module":"APM","ecs":{"version":"1.5.0"},"message":"ended transaction {\"trans\":\"7614bf8a4895a7a4\",\"trace\":\"8a5c71d2c1c63f6dfc1a5bfd046701ed\",\"type\":\"request\",\"result\":\"HTTP 2xx\",\"name\":\"GET /healthcheck\"}"}
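For context, a line shaped like the one above can be reproduced with nothing but the standard library (a hand-rolled illustrative sketch; in Node.js, ecs-pino-format builds this structure for you):

```python
import json
from datetime import datetime, timezone

def ecs_log(level: str, message: str, **extra) -> str:
    """Serialize one ECS-shaped log line (illustration only, not ecs-pino-format)."""
    event = {
        "log": {"level": level, "logger": "pino"},
        # ECS expects an ISO-8601 UTC timestamp in @timestamp
        "@timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z",
        "ecs": {"version": "1.5.0"},
        "message": message,
    }
    event.update(extra)  # extra top-level fields such as "module"
    return json.dumps(event)

print(ecs_log("debug", "ended transaction", module="APM"))
```

The key point for Filebeat is that `log` is a JSON *object* here, not a string, which matters for the mapping error shown later.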

Here is my Filebeat configuration:

filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true
          hints.default_config:
            type: container
            paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log

With this config, my logs are not parsed by Kibana.

I added this annotation to my pod (not even sure it's required):

co.elastic.logs/json.keys_under_root: true

This is the error I see in Filebeat:

2020-06-10T16:47:00.773Z    WARN    [elasticsearch]    elasticsearch/client.go:384    Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x304e23a, ext:63727404418, loc:(*time.Location)(nil)}, Meta:null, Fields:{"agent":{"ephemeral_id":"cc9f9def-5d67-4592-8459-f556f8f2fc29","hostname":"filebeat-filebeat-4dqpq","id":"e8d9cffe-ceca-49f5-ae31-65bbb29353e8","type":"filebeat","version":"7.7.0"},"ecs":{"version":"1.5.0"},"host":{"name":"filebeat-filebeat-4dqpq"},"input":{"type":"container"},"json":{"@timestamp":"2020-06-10T16:46:58.049Z","ecs":{"version":"1.5.0"},"log":"","message":"sending span {\"span\":\"87ad75b7f0858817\",\"parent\":\"82e1f82870aa3e55\",\"trace\":\"13c7569f7562a72bef1300097d1ab86c\",\"name\":\"SELECT\",\"type\":\"db\"}","module":"APM","trace.id":"13c7569f7562a72bef1300097d1ab86c","transaction.id":"82e1f82870aa3e55"},"kubernetes":{"container":{"image":"registry.gitlab.com/consensys/codefi/products/assets/workflow-api:v0.1.3-2-g358bbc6","name":"generic-app"},"labels":{"app_kubernetes_io/instance":"workflow-api","app_kubernetes_io/name":"workflow-api","pod-template-hash":"b946b7c49"},"namespace":"codefi","node":{"name":"ip-192-168-33-94.eu-west-3.compute.internal"},"pod":{"name":"workflow-api-b946b7c49-7qldb","uid":"e984519d-8cc5-426d-bdac-e3f0dfa55c0b"},"replicaset":{"name":"workflow-api-b946b7c49"}},"log":{"file":{"path":"/var/log/containers/workflow-api-b946b7c49-7qldb_codefi_generic-app-9bff78b56f893e056e1e614de3c28aa6671dd4723c0dfc166460ac9bde43571a.log"},"offset":2303955},"stream":"stdout"}, Private:file.State{Id:"", Finished:false, Fileinfo:(*os.fileStat)(0xc000ac8a90), Source:"/var/log/containers/workflow-api-b946b7c49-7qldb_codefi_generic-app-9bff78b56f893e056e1e614de3c28aa6671dd4723c0dfc166460ac9bde43571a.log", Offset:2304478, Timestamp:time.Time{wall:0xbfb060a48062556d, ext:986606661848, loc:(*time.Location)(0x3bdbf40)}, TTL:-1, Type:"container", Meta:map[string]string(nil), 
FileStateOS:file.StateOS{Inode:0x601c938, Device:0x10301}}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"object mapping for [json.log] tried to parse field [log] as object, but found a concrete value"

If I remove the annotation, I don't see the error, so I guess it comes from this configuration.

Did I miss some docs here? Thanks for your help.

Found it. For future reference, when using the ECS log format with hints-based autodiscover, simply add these annotations to your pods:

co.elastic.logs/json.keys_under_root: true
co.elastic.logs/json.message_key: message
co.elastic.logs/json.overwrite_keys: true
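In a pod manifest, these go under `metadata.annotations`; note that Kubernetes annotation values must be strings, so the booleans should be quoted (the pod name and image below are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # placeholder
  annotations:
    co.elastic.logs/json.keys_under_root: "true"
    co.elastic.logs/json.message_key: "message"
    co.elastic.logs/json.overwrite_keys: "true"
spec:
  containers:
    - name: app
      image: my-app:latest  # placeholder
```

With `keys_under_root` and `overwrite_keys` enabled, the decoded ECS fields replace Filebeat's own `@timestamp`, `message`, and `log` keys instead of being nested under a `json.*` prefix, which is what caused the mapping conflict above.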

I hope this can help others!

I had the same problem in 2022; the annotations above didn't work for me. I found a blog post about logging to Elastic with Docker that inspired me, and this is what I did:

filebeat.autodiscover:
 providers:
   - type: kubernetes
     node: ${NODE_NAME}
     hints.enabled: true
     hints.default_config:
       type: container
       paths:
         - /var/log/containers/*${data.kubernetes.container.id}.log


processors:
  # - add_cloud_metadata:
  # - add_host_metadata:
  - decode_json_fields:
      fields: ["message"]
      process_array: false
      max_depth: 3
      target: ""
      overwrite_keys: true
      add_error_key: true
      expand_keys: true
#...

I added the processor above to extract fields from the "message" field. Hope it helps someone!
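Conceptually, with `target: ""` and `overwrite_keys: true`, the processor behaves roughly like this (a plain-Python sketch for illustration, not Filebeat's actual implementation; `max_depth` and `expand_keys` are omitted for brevity):

```python
import json

def decode_json_fields(event: dict, fields=("message",),
                       overwrite_keys=True, add_error_key=True) -> dict:
    """Sketch of decode_json_fields with target: "" (merge into the event root)."""
    for field in fields:
        raw = event.get(field)
        if not isinstance(raw, str):
            continue
        try:
            decoded = json.loads(raw)
        except ValueError:
            if add_error_key:
                # Filebeat attaches an error field when decoding fails
                event["error"] = {"message": f"Error decoding JSON in field {field}"}
            continue
        if not isinstance(decoded, dict):
            continue
        for key, value in decoded.items():
            # overwrite_keys lets decoded fields replace existing event keys
            if overwrite_keys or key not in event:
                event[key] = value
    return event

event = {"message": '{"log":{"level":"debug"},"module":"APM","message":"ended transaction"}'}
decoded = decode_json_fields(event)
# "log" and "module" become top-level fields; the outer "message"
# is replaced by the inner ECS message because overwrite_keys is true
```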

I have used the below configuration:

    providers:
      - hints.default_config:
          paths:
            - '/var/log/containers/*-${data.container.id}.log'
          type: container
        hints.enabled: true
        host: '${HOSTNAME}'

and the annotation co.elastic.logs/fileset=access

to get all container/pod logs into Elasticsearch.
