
Log entries lost while using fluent-bit with kubernetes filter and elasticsearch output

Sometimes we find that some logs are missing from ES, even though we can see them in Kubernetes.

The only problem I can find in the fluent-bit logs points to an issue with the kubernetes filter:

[2020/11/22 09:53:18] [debug] [filter:kubernetes:kubernetes.1] could not merge JSON log as requested

Once we set the kubernetes filter's "Merge_Log" option to "Off", the problem seems to go away (at least the warnings/errors no longer show up in the fluent-bit logs). But of course we then lose an important feature, namely having actual fields/values beyond the "message" itself.
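For reference, the workaround is just that one option in the kubernetes filter; a minimal sketch of the change (the full filter block is shown in the config further down):

    [FILTER]
        Name       kubernetes
        Match      kube.*
        Merge_Log  Off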

Apart from that, there are no other error/warning messages in either fluent-bit or elasticsearch, which is why this is my main suspect. The logs (with log_level info) are filled with:

k --context contexto09 -n logging-system logs -f -l app=fluent-bit --max-log-requests 31 | grep -iv "\[ info\]"
[2020/11/22 19:45:02] [ warn] [engine] failed to flush chunk '1-1606074289.692844263.flb', retry in 25 seconds: task_id=31, input=appstream > output=es.0
[2020/11/22 19:45:02] [ warn] [engine] failed to flush chunk '1-1606074208.938295842.flb', retry in 25 seconds: task_id=67, input=appstream > output=es.0
[2020/11/22 19:45:08] [ warn] [engine] failed to flush chunk '1-1606074298.662911160.flb', retry in 10 seconds: task_id=76, input=appstream > output=es.0
[2020/11/22 19:45:13] [ warn] [engine] failed to flush chunk '1-1606074310.619565119.flb', retry in 9 seconds: task_id=77, input=appstream > output=es.0
[2020/11/22 19:45:13] [ warn] [engine] failed to flush chunk '1-1606073869.655178524.flb', retry in 1164 seconds: task_id=33, input=appstream > output=es.0
[2020/11/22 19:45:18] [ warn] [engine] failed to flush chunk '1-1606074298.662911160.flb', retry in 282 seconds: task_id=76, input=appstream > output=es.0
[2020/11/22 19:45:21] [ warn] [engine] failed to flush chunk '1-1606073620.626120246.flb', retry in 1974 seconds: task_id=8, input=appstream > output=es.0
[2020/11/22 19:45:21] [ warn] [engine] failed to flush chunk '1-1606074050.441691966.flb', retry in 1191 seconds: task_id=51, input=appstream > output=es.0
[2020/11/22 19:45:22] [ warn] [engine] failed to flush chunk '1-1606074310.619565119.flb', retry in 79 seconds: task_id=77, input=appstream > output=es.0
[2020/11/22 19:45:22] [ warn] [engine] failed to flush chunk '1-1606074319.600878876.flb', retry in 6 seconds: task_id=78, input=appstream > output=es.0
[2020/11/22 19:45:09] [ warn] [engine] failed to flush chunk '1-1606073576.849876665.flb', retry in 1091 seconds: task_id=4, input=appstream > output=es.0
[2020/11/22 19:45:12] [ warn] [engine] failed to flush chunk '1-1606074292.958592278.flb', retry in 898 seconds: task_id=141, input=appstream > output=es.0
[2020/11/22 19:45:14] [ warn] [engine] failed to flush chunk '1-1606074302.347198351.flb', retry in 32 seconds: task_id=143, input=appstream > output=es.0
[2020/11/22 19:45:14] [ warn] [engine] failed to flush chunk '1-1606074253.953778140.flb', retry in 933 seconds: task_id=133, input=appstream > output=es.0
[2020/11/22 19:45:16] [ warn] [engine] failed to flush chunk '1-1606074313.923004098.flb', retry in 6 seconds: task_id=144, input=appstream > output=es.0
[2020/11/22 19:45:18] [ warn] [engine] failed to flush chunk '1-1606074022.933436366.flb', retry in 73 seconds: task_id=89, input=appstream > output=es.0
[2020/11/22 19:45:18] [ warn] [engine] failed to flush chunk '1-1606074304.968844730.flb', retry in 82 seconds: task_id=145, input=appstream > output=es.0
[2020/11/22 19:45:19] [ warn] [engine] failed to flush chunk '1-1606074316.958207701.flb', retry in 10 seconds: task_id=146, input=appstream > output=es.0
[2020/11/22 19:45:19] [ warn] [engine] failed to flush chunk '1-1606074283.907428020.flb', retry in 207 seconds: task_id=139, input=appstream > output=es.0
[2020/11/22 19:45:22] [ warn] [engine] failed to flush chunk '1-1606074313.923004098.flb', retry in 49 seconds: task_id=144, input=appstream > output=es.0
[2020/11/22 19:45:24] [ warn] [engine] failed to flush chunk '1-1606074232.931522416.flb', retry in 109 seconds: task_id=129, input=appstream > output=es.0
...
...
[2020/11/22 19:46:31] [ warn] [engine] chunk '1-1606074022.933436366.flb' cannot be retried: task_id=89, input=appstream > output=es.0

If I turn log_level up to "debug", then I do see these messages:

[2020/11/22 09:53:18] [debug] [filter:kubernetes:kubernetes.1] could not merge JSON log as requested

My suspicion is that these failed merges are the reason the chunks cannot be flushed, because when "Merge_Log" is "Off" everywhere I get no failed-to-flush errors.
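Turning the verbosity up is a one-line change in the [SERVICE] section (my config below runs with info):

    [SERVICE]
        Log_Level    debug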

My current fluent-bit config looks like this:

kind: ConfigMap
metadata:
  labels:
    app: fluent-bit
    app.kubernetes.io/instance: cluster-logging
    chart: fluent-bit-2.8.6
    heritage: Tiller
    release: cluster-logging
  name: config
  namespace: logging-system
apiVersion: v1
data:
  fluent-bit-input.conf: |
    [INPUT]
        Name             tail
        Path             /var/log/containers/*.log
        Exclude_Path     /var/log/containers/cluster-logging-*.log,/var/log/containers/elasticsearch-data-*.log,/var/log/containers/kube-apiserver-*.log
        Parser           docker
        Tag              kube.*
        Refresh_Interval 5
        Mem_Buf_Limit    15MB
        Skip_Long_Lines  On
        Ignore_Older     7d
        DB               /tail-db/tail-containers-state.db
        DB.Sync          Normal
    [INPUT]
        Name            systemd
        Path            /var/log/journal/
        Tag             host.*
        Max_Entries     1000
        Read_From_Tail  true
        Strip_Underscores  true
    [INPUT]
        Name             tail
        Path             /var/log/containers/kube-apiserver-*.log
        Parser           docker
        Tag              kube-apiserver.*
        Refresh_Interval 5 
        Mem_Buf_Limit    5MB
        Skip_Long_Lines  On
        Ignore_Older     7d
        DB               /tail-db/tail-kube-apiserver-containers-state.db
        DB.Sync          Normal

  fluent-bit-filter.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_Tag_Prefix     kube.var.log.containers.
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        K8S-Logging.Parser  On
        K8S-Logging.Exclude On
        Merge_Log           On
        Keep_Log            Off
        Annotations         Off
    [FILTER]
        Name                kubernetes
        Match               kube-apiserver.*
        Kube_Tag_Prefix     kube-apiserver.var.log.containers.
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        K8S-Logging.Parser  Off
        K8S-Logging.Exclude Off
        Merge_Log           Off
        Keep_Log            On
        Annotations         Off

  fluent-bit-output.conf: |
    [OUTPUT]
        Name  es
        Match logs
        Host  elasticsearch-data
        Port  9200
        Logstash_Format On
        Retry_Limit 5
        Type  flb_type
        Time_Key @timestamp
        Replace_Dots On
        Logstash_Prefix logs
        Logstash_Prefix_Key index
        Generate_ID On
        Buffer_Size 2MB
        Trace_Output Off
    [OUTPUT]
        Name  es
        Match sys
        Host  elasticsearch-data
        Port  9200
        Logstash_Format On
        Retry_Limit 5
        Type  flb_type
        Time_Key @timestamp
        Replace_Dots On
        Logstash_Prefix sys-logs
        Generate_ID On
        Buffer_Size 2MB
        Trace_Output Off
    [OUTPUT]
        Name  es
        Match host.*
        Host  elasticsearch-data
        Port  9200
        Logstash_Format On
        Retry_Limit 10
        Type  flb_type
        Time_Key @timestamp
        Replace_Dots On
        Logstash_Prefix host-logs
        Generate_ID On
        Buffer_Size 2MB
        Trace_Output Off
    [OUTPUT]
        Name  es
        Match kube-apiserver.*
        Host  elasticsearch-data
        Port  9200
        Logstash_Format On
        Retry_Limit 10
        Type _doc 
        Time_Key @timestamp
        Replace_Dots On
        Logstash_Prefix kube-apiserver
        Generate_ID On
        Buffer_Size 2MB
        Trace_Output Off

  fluent-bit-stream-processor.conf: |
    [STREAM_TASK]
        Name   appstream
        Exec   CREATE STREAM appstream WITH (tag='logs') AS SELECT * from TAG:'kube.*' WHERE NOT (kubernetes['namespace_name']='ambassador-system' OR kubernetes['namespace_name']='argocd' OR kubernetes['namespace_name']='istio-system' OR kubernetes['namespace_name']='kube-system' OR kubernetes['namespace_name']='logging-system' OR kubernetes['namespace_name']='monitoring-system' OR kubernetes['namespace_name']='storage-system') ;
    [STREAM_TASK]
        Name   sysstream
        Exec   CREATE STREAM sysstream WITH (tag='sys') AS SELECT * from TAG:'kube.*' WHERE (kubernetes['namespace_name']='ambassador-system' OR kubernetes['namespace_name']='argocd' OR kubernetes['namespace_name']='istio-system' OR kubernetes['namespace_name']='kube-system' OR kubernetes['namespace_name']='logging-system' OR kubernetes['namespace_name']='monitoring-system' OR kubernetes['namespace_name']='storage-system') ;

  fluent-bit-service.conf: |
    [SERVICE]
        Flush        3
        Daemon       Off
        Log_Level    info
        Parsers_File parsers.conf
        Streams_File /fluent-bit/etc/fluent-bit-stream-processor.conf

  fluent-bit.conf: |
    @INCLUDE fluent-bit-service.conf
    @INCLUDE fluent-bit-input.conf
    @INCLUDE fluent-bit-filter.conf
    @INCLUDE fluent-bit-output.conf
    
  parsers.conf: |
    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%L
        Time_Keep    On

"Merge_Log" is "Off" for "kube-apiserver.*" and has worked fine so far, although the resulting behavior is not ideal (no field mapping is done). "Merge_Log" is "On" for "kube.*" and produces fields in ES as expected... but we are losing logs.
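One mitigation I have seen mentioned (not something my config above uses): the kubernetes filter also supports a "Merge_Log_Key" option, which nests the decoded fields under a single key instead of merging them into the record root, so differently-typed fields from different apps are less likely to collide in the ES index mapping. A minimal sketch (the key name is arbitrary):

    [FILTER]
        Name           kubernetes
        Match          kube.*
        Merge_Log      On
        Merge_Log_Key  log_processed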

I found the code in the kubernetes filter that raises this error, but I lack the knowledge to understand how to "fix" whatever is causing the message: https://github.com/fluent/fluent-bit/blob/master/plugins/filter_kubernetes/kubernetes.c#L162
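As far as I understand it, that path is hit when the "log" value is not itself a JSON map. A hedged illustration with made-up records:

    # merges fine: the log field contains serialized JSON
    {"log": "{\"msg\":\"payment ok\",\"level\":\"info\"}\n", "stream": "stdout"}

    # cannot be merged: plain text, triggers "could not merge JSON log as requested"
    {"log": "starting worker process 42\n", "stream": "stderr"}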

This is getting really frustrating, and I can't figure out why it happens or, better yet, how to fix it. Any help, please?


1. I may be missing something, but I can't find any output for kube.*

I had the same error. After enabling

[OUTPUT]
  ....
  Trace_Error on

Elastic reported a field-mapping conflict back to Fluent Bit:

stderr F {"took":0,"errors":true,"items":[{"index":{"_index":"app-2022.01.02","_type":"_doc","_id":"H8keHX4BFLcmSeMefxLq","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse field [log_processed.pid] of type [long] in document with id 'H8keHX4BFLcmSeMefxLq'. Preview of field's value: '18:tid 140607188051712'","caused_by":{"type":"illegal_argument_exception","reason":"For input string: \"18:tid 140607188051712\""}}}}]}

The index mapping in my Elastic had a field "pid" of type "long", but one of my [PARSER]s was pushing a text value into it; once that was fixed, the problem went away.
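If you hit the same thing, you can confirm the conflicting type straight from the index mapping; a sketch assuming the index and field names from the error above and the host/port from the question's config:

    curl -s 'http://elasticsearch-data:9200/app-2022.01.02/_mapping/field/log_processed.pid?pretty'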

