
Filebeat is not sending logs to Logstash on Kubernetes

I'm trying to ship Kubernetes logs with Filebeat and Logstash. I have several deployments in the same namespace.

I tried the filebeat.yml configuration suggested by Elastic in [this link](https://raw.githubusercontent.com/elastic/beats/7.x/deploy/kubernetes/filebeat-kubernetes.yaml).

So, this is my overall configuration:

filebeat.yml

filebeat.inputs:
- type: container
  paths:
    - '/var/lib/docker/containers/*.log'
  processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"

# To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
#filebeat.autodiscover:
#  providers:
#    - type: kubernetes
#      node: ${NODE_NAME}
#      hints.enabled: true
#      hints.default_config:
#        type: container
#        paths:
#          - /var/log/containers/*${data.kubernetes.container.id}.log

output.logstash:
  hosts: ['logstash.default.svc.cluster.local:5044']

Logstash Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.15.0
        ports:
        - containerPort: 5044
        volumeMounts:
          - name: config-volume
            mountPath: /usr/share/logstash/config
          - name: logstash-pipeline-volume
            mountPath: /usr/share/logstash/pipeline
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.yml
              path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.conf
              path: logstash.conf

Logstash ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: default
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      tcp {
        mode => "client"
        host => "10.184.0.4"
        port => 5001
        codec => "json_lines"
      }
      stdout {
        codec => rubydebug
      }
    }

Logstash Service

kind: Service
apiVersion: v1
metadata:
  name: logstash
  namespace: default
spec:
  selector:
    app: logstash
  ports:
  - protocol: TCP
    port: 5044
    targetPort: 5044

Both the Filebeat DaemonSet and the Logstash Deployment are running. The `kubectl logs` output for each is shown below.

The Filebeat DaemonSet shows:

2021-10-13T04:10:14.201Z    INFO    instance/beat.go:665    Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2021-10-13T04:10:14.219Z    INFO    instance/beat.go:673    Beat ID: b90d1561-e989-4ed1-88f9-9b88045cee29
2021-10-13T04:10:14.220Z    INFO    [seccomp]   seccomp/seccomp.go:124  Syscall filter successfully installed
2021-10-13T04:10:14.220Z    INFO    [beat]  instance/beat.go:1014   Beat info   {"system_info": {"beat": {"path": {"config": "/usr/share/filebeat", "data": "/usr/share/filebeat/data", "home": "/usr/share/filebeat", "logs": "/usr/share/filebeat/logs"}, "type": "filebeat", "uuid": "b90d1561-e989-4ed1-88f9-9b88045cee29"}}}
2021-10-13T04:10:14.220Z    INFO    [beat]  instance/beat.go:1023   Build info  {"system_info": {"build": {"commit": "9023152025ec6251bc6b6c38009b309157f10f17", "libbeat": "7.15.0", "time": "2021-09-16T03:16:09.000Z", "version": "7.15.0"}}}
2021-10-13T04:10:14.220Z    INFO    [beat]  instance/beat.go:1026   Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.16.6"}}}
2021-10-13T04:10:14.221Z    INFO    [beat]  instance/beat.go:1030   Host info   {"system_info": {"host": {"architecture":"x86_64","boot_time":"2021-10-06T19:41:55Z","containerized":true,"name":"filebeat-hvqx4","ip":["127.0.0.1/8","10.116.6.42/24"],"kernel_version":"5.4.120+","mac":["ae:ab:28:37:27:2a"],"os":{"type":"linux","family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":9,"patch":2009,"codename":"Core"},"timezone":"UTC","timezone_offset_sec":0,"id":"38c2fd0d69ba05ae64d8a4d4fc156791"}}}
2021-10-13T04:10:14.221Z    INFO    [beat]  instance/beat.go:1059   Process info    {"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 8, "ppid": 1, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2021-10-13T04:10:12.819Z"}}}
2021-10-13T04:10:14.221Z    INFO    instance/beat.go:309    Setup Beat: filebeat; Version: 7.15.0
2021-10-13T04:10:14.222Z    INFO    [publisher] pipeline/module.go:113  Beat name: filebeat-hvqx4
2021-10-13T04:10:14.224Z    WARN    beater/filebeat.go:178  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2021-10-13T04:10:14.225Z    INFO    [monitoring]    log/log.go:142  Starting metrics logging every 30s
2021-10-13T04:10:14.225Z    INFO    instance/beat.go:473    filebeat start running.
2021-10-13T04:10:14.227Z    INFO    memlog/store.go:119 Loading data file of '/usr/share/filebeat/data/registry/filebeat' succeeded. Active transaction id=0
2021-10-13T04:10:14.227Z    INFO    memlog/store.go:124 Finished loading transaction log file for '/usr/share/filebeat/data/registry/filebeat'. Active transaction id=0
2021-10-13T04:10:14.227Z    WARN    beater/filebeat.go:381  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2021-10-13T04:10:14.228Z    INFO    [registrar] registrar/registrar.go:109  States Loaded from registrar: 0
2021-10-13T04:10:14.228Z    INFO    [crawler]   beater/crawler.go:71    Loading Inputs: 1
2021-10-13T04:10:14.228Z    INFO    beater/crawler.go:148   Stopping Crawler
2021-10-13T04:10:14.228Z    INFO    beater/crawler.go:158   Stopping 0 inputs
2021-10-13T04:10:14.228Z    INFO    beater/crawler.go:178   Crawler stopped
2021-10-13T04:10:14.228Z    INFO    [registrar] registrar/registrar.go:132  Stopping Registrar
2021-10-13T04:10:14.228Z    INFO    [registrar] registrar/registrar.go:166  Ending Registrar
2021-10-13T04:10:14.228Z    INFO    [registrar] registrar/registrar.go:137  Registrar stopped
2021-10-13T04:10:44.229Z    INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s    {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"cfs":{"period":{"us":100000}},"id":"/"},"cpuacct":{"id":"/","total":{"ns":307409530}},"memory":{"id":"/","mem":{"limit":{"bytes":209715200},"usage":{"bytes":52973568}}}},"cpu":{"system":{"ticks":80,"time":{"ms":85}},"total":{"ticks":270,"time":{"ms":283},"value":270},"user":{"ticks":190,"time":{"ms":198}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"f5abb082-a094-4f99-a046-bc183d415455","uptime":{"ms":30208},"version":"7.15.0"},"memstats":{"gc_next":19502448,"memory_alloc":10052000,"memory_sys":75056136,"memory_total":55390312,"rss":112922624},"runtime":{"goroutines":12}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0},"type":"logstash"},"pipeline":{"clients":0,"events":{"active":0},"queue":{"max_events":4096}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":2},"load":{"1":0.14,"15":0.28,"5":0.31,"norm":{"1":0.07,"15":0.14,"5":0.155}}}}}}

The Logstash Deployment logs show:

Using bundled JDK: /usr/share/logstash/jdk
warning: no jvm.options file found
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2021-10-13 08:46:58.674 [main] runner - Starting Logstash {"logstash.version"=>"7.15.0", "jruby.version"=>"jruby 9.2.19.0 (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.11+9 on 11.0.11+9 +jit [linux-x86_64]"}
[INFO ] 2021-10-13 08:46:58.698 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2021-10-13 08:46:58.700 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2021-10-13 08:46:59.077 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2021-10-13 08:46:59.097 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"7a0e5b89-70a1-4004-b38e-c31fadcd7251", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2021-10-13 08:47:00.950 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2021-10-13 08:47:01.468 [Converge PipelineAction::Create<main>] Reflections - Reflections took 203 ms to scan 1 urls, producing 120 keys and 417 values 
[WARN ] 2021-10-13 08:47:02.496 [Converge PipelineAction::Create<main>] plain - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-10-13 08:47:02.526 [Converge PipelineAction::Create<main>] beats - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-10-13 08:47:02.664 [Converge PipelineAction::Create<main>] jsonlines - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2021-10-13 08:47:02.947 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x3b822f13@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}
[INFO ] 2021-10-13 08:47:05.467 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>2.52}
[INFO ] 2021-10-13 08:47:05.473 [[main]-pipeline-manager] beats - Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2021-10-13 08:47:05.555 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2021-10-13 08:47:05.588 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2021-10-13 08:47:05.907 [[main]<beats] Server - Starting server on port: 5044

So, my questions are:

  1. Why is Filebeat not ingesting the logs from Kubernetes?
  2. Are there different ways to set the Logstash `hosts` in filebeat.yml? Some examples use a full DNS name like my configuration does, while others use only the Service name.
  3. How can I trigger or send test logs to verify that my configuration is working?

My mistake: in the Filebeat DaemonSet I forgot to define the `NODE_NAME` environment variable. So, to the container's `env` section of the configuration above I just added:

 - name: NODE_NAME
   valueFrom:
     fieldRef:
       fieldPath: spec.nodeName

Filebeat is running well now.
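For context, this is roughly where that variable sits in the Filebeat DaemonSet spec. It is a trimmed sketch modeled on Elastic's reference manifest linked above; the image tag matches the versions used here, but surrounding fields (RBAC, volume mounts, the rest of the env) are omitted:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.15.0
        env:
        # Without this entry, ${NODE_NAME} in filebeat.yml expands to an
        # empty string, so add_kubernetes_metadata cannot scope to the node.
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      # serviceAccountName, volumes, etc. omitted for brevity
```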
