
Filebeat does not send logs to logstash

So here is the big picture: my goal is to index large amounts of (.txt) data using the ELK stack + Filebeat.

Basically, my problem seems to be that Filebeat cannot send logs to Logstash. My guess is that some Docker networking configuration is off...

The code for my project is available at https://github.com/mhyousefi/elk-docker

THE ELK CONTAINER

To do this, I have a docker-compose.yml to run a container from the image sebp/elk, which looks like this:

version: '2'

services:
  elk:
    container_name: elk
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5045:5044"
    volumes:
      - /path/to/volumed-folder:/logstash
    networks:
      - elk_net

networks:
  elk_net:
    driver: bridge

創建容器后,我會轉到容器 bash 終端並運行以下命令:

/opt/logstash/bin/logstash --path.data /tmp/logstash/data -f /logstash/config/filebeat-config.conf

Running this command, I get the following logs, after which it just starts waiting without printing any further logs:

$ /opt/logstash/bin/logstash --path.data /tmp/logstash/data -f /logstash/config/filebeat-config.conf                                                                                             
Sending Logstash's logs to /opt/logstash/logs which is now configured via log4j2.properties
[2018-08-14T11:51:11,693][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/tmp/logstash/data/queue"}
[2018-08-14T11:51:11,701][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/tmp/logstash/data/dead_letter_queue"}
[2018-08-14T11:51:12,194][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-08-14T11:51:12,410][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"3646b6e4-d540-4c9c-a38d-2769aef5a05e", :path=>"/tmp/logstash/data/uuid"}
[2018-08-14T11:51:13,089][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-14T11:51:15,554][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-14T11:51:16,088][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-14T11:51:16,101][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-14T11:51:16,291][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-14T11:51:16,391][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-14T11:51:16,398][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-14T11:51:16,460][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-08-14T11:51:16,515][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-14T11:51:16,559][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-14T11:51:16,688][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2018-08-14T11:51:16,899][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5045"}
[2018-08-14T11:51:16,925][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x54ab986e run>"}
[2018-08-14T11:51:17,170][INFO ][org.logstash.beats.Server] Starting server on port: 5045
[2018-08-14T11:51:17,187][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-08-14T11:51:17,637][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9601}
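
As an aside (a sanity check of my own, not part of the original setup): the last log line shows Logstash's API endpoint on port 9601, and its standard monitoring API can confirm from inside the elk container that the "main" pipeline is actually up:

# query Logstash's monitoring API (here on 9601, per the last log line above)
curl -s 'http://localhost:9601/_node/pipelines?pretty'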

Now, this is what filebeat-config.conf looks like:

input {
  beats {
    port => "5044"
  }
}

output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}"
  }
}
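
As a side note (my suggestion, not something from the original post), Logstash can validate a config file without starting the pipeline via its standard --config.test_and_exit flag, which quickly rules out syntax problems in filebeat-config.conf:

# prints "Configuration OK" and exits if the file parses cleanly
/opt/logstash/bin/logstash --path.data /tmp/logstash/data -f /logstash/config/filebeat-config.conf --config.test_and_exit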

THE FILEBEAT CONTAINER

My Filebeat container is created using the docker-compose.yml file below:

version: "2"

services:
  filebeat:
    container_name: filebeat
    hostname: filebeat
    image: docker.elastic.co/beats/filebeat:6.3.0
    user: root
    # command: ./filebeat -c /usr/share/filebeat-volume/config/filebeat.yml -E name=mybeat
    volumes:
      # "volumed-folder" lies under ${PROJECT_DIR}/filebeat or could be anywhere else you wish
      - /path/to/volumed-folder:/usr/share/filebeat/filebeat-volume:ro
    networks:
      - filebeat_net

networks:
  filebeat_net:
    external: true

創建容器后,我會轉到容器 bash 終端,將/usr/share/filebeat下現有的filebeat.yml替換為我已卷的文件,然后運行命令:

./filebeat -e -c ./filebeat.yml -E name="mybeat"

The terminal immediately displays the following logs:

[root@filebeat filebeat]# ./filebeat -e -c ./filebeat.yml -E name="mybeat"
2018-08-14T12:13:16.325Z        INFO    instance/beat.go:492    Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2018-08-14T12:13:16.325Z        INFO    instance/beat.go:499    Beat UUID: 3b4b3897-ef77-43ad-b982-89e8f690a96e
2018-08-14T12:13:16.325Z        INFO    [beat]  instance/beat.go:716    Beat info       {"system_info": {"beat": {"path": {"config": "/usr/share/filebeat", "data": "/usr/share/filebeat/data", "home": "/usr/share/filebeat", "logs": "/usr/share/filebeat/logs"}, "type": "filebeat", "uuid": "3b4b3897-ef77-43ad-b982-89e8f690a96e"}}}
2018-08-14T12:13:16.325Z        INFO    [beat]  instance/beat.go:725    Build info      {"system_info": {"build": {"commit": "a04cb664d5fbd4b1aab485d1766f3979c138fd38", "libbeat": "6.3.0", "time": "2018-06-11T22:34:44.000Z", "version": "6.3.0"}}}
2018-08-14T12:13:16.325Z        INFO    [beat]  instance/beat.go:728    Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":6,"version":"go1.9.4"}}}
2018-08-14T12:13:16.327Z        INFO    [beat]  instance/beat.go:732    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2018-08-04T17:34:15Z","containerized":true,"hostname":"filebeat","ips":["127.0.0.1/8","172.28.0.2/16"],"kernel_version":"4.4.0-116-generic","mac_addresses":["02:42:ac:1c:00:02"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":5,"patch":1804,"codename":"Core"},"timezone":"UTC","timezone_offset_sec":0}}}
2018-08-14T12:13:16.328Z        INFO    [beat]  instance/beat.go:761    Process info    {"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 93, "ppid": 28, "seccomp": {"mode":"filter"}, "start_time": "2018-08-14T12:13:15.530Z"}}}
2018-08-14T12:13:16.328Z        INFO    instance/beat.go:225    Setup Beat: filebeat; Version: 6.3.0
2018-08-14T12:13:16.329Z        INFO    pipeline/module.go:81   Beat name: mybeat
2018-08-14T12:13:16.329Z        WARN    [cfgwarn]       beater/filebeat.go:61   DEPRECATED: prospectors are deprecated, Use `inputs` instead. Will be removed in version: 7.0.0
2018-08-14T12:13:16.330Z        INFO    [monitoring]    log/log.go:97   Starting metrics logging every 30s
2018-08-14T12:13:16.330Z        INFO    instance/beat.go:315    filebeat start running.
2018-08-14T12:13:16.330Z        INFO    registrar/registrar.go:112      Loading registrar data from /usr/share/filebeat/data/registry
2018-08-14T12:13:16.330Z        INFO    registrar/registrar.go:123      States Loaded from registrar: 0
2018-08-14T12:13:16.331Z        WARN    beater/filebeat.go:354  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-08-14T12:13:16.331Z        INFO    crawler/crawler.go:48   Loading Inputs: 1
2018-08-14T12:13:16.331Z        INFO    log/input.go:111        Configured paths: [/usr/share/filebeat-volume/data/Shakespeare.txt]
2018-08-14T12:13:16.331Z        INFO    input/input.go:87       Starting input of type: log; ID: 1899165251698784346 
2018-08-14T12:13:16.331Z        INFO    crawler/crawler.go:82   Loading and starting Inputs completed. Enabled inputs: 1

Then, every 30 seconds, it displays the following:

2018-08-14T12:13:46.334Z        INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":24}},"total":{"ticks":30,"time":{"ms":36},"value":30},"user":{"ticks":10,"time":{"ms":12}}},"info":{"ephemeral_id":"16c484f0-0cf8-4c10-838d-b39755284af9","uptime":{"ms":30017}},"memstats":{"gc_next":4473924,"memory_alloc":3040104,"memory_total":3040104,"rss":21061632}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":6},"load":{"1":1.46,"15":1.52,"5":1.66,"norm":{"1":0.2433,"15":0.2533,"5":0.2767}}}}}}

And no index pattern is created in Kibana.

This is what my filebeat.yml looks like:

filebeat.inputs:
- type: log
  paths:
    - /path/to/a/log/file

output.logstash:
  hosts: ["elk:5044"]

setup.kibana:
  host: "localhost:5601"

I have used this stackoverflow question to define the networks sections of my docker-compose files, so that my containers can talk to each other using their container_names.

So, when I do

output.logstash:
  hosts: ["elk:5044"]

I expect Filebeat to send the logs to port 5044 of the elk container, where Logstash is listening for incoming messages.

After running Filebeat, I do see the following logs in the terminal where I did docker-compose up elk:

elk    | 
elk    | ==> /var/log/elasticsearch/elasticsearch.log <==
elk    | [2018-08-14T11:51:16,974][INFO ][o.e.c.m.MetaDataIndexTemplateService] [fZr_LDR] adding template [logstash] for index patterns [logstash-*]

So I assume some kind of communication is taking place between Logstash and Filebeat.

However, despite following the aforementioned stackoverflow response, I cannot do ping elk inside my Filebeat container; the hostname is not resolved.

I appreciate any help!

UPDATE (Aug 15, 2018)

I think I do not even need to publish a port for my ELK container at all. What happens is that Logstash is listening on port 5044 inside its container. As long as the filebeat.yml inside the Filebeat container can resolve the ELK host and send its logs to port 5044 there ("elk:5044"), things should work fine.

這就是為什么我刪除了"5045:5044"行,並修復了我的Filebeat容器docker-compose.yml文件中的networks部分,以包含以下內容:

networks:
  filebeat_net:
    external:
      name: elk_elk_net

This seems to work, since when I do ping elk, a connection is established.
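
For context (my addition): Compose prefixes network names with the project name, which defaults to the directory name, so the network declared as elk_net in the elk compose file shows up as elk_elk_net. The standard Docker CLI confirms the full name and which containers are attached:

# list all networks with their full, project-prefixed names
docker network ls
# show the containers attached to the network and their addresses
docker network inspect elk_elk_net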

However, while the networking issue is resolved (I can ping!), the connection between Logstash and Filebeat remains troublesome, and I keep getting the following message every 30 seconds.

2018-08-14T12:13:46.334Z        INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":24}},"total":{"ticks":30,"time":{"ms":36},"value":30},"user":{"ticks":10,"time":{"ms":12}}},"info":{"ephemeral_id":"16c484f0-0cf8-4c10-838d-b39755284af9","uptime":{"ms":30017}},"memstats":{"gc_next":4473924,"memory_alloc":3040104,"memory_total":3040104,"rss":21061632}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":6},"load":{"1":1.46,"15":1.52,"5":1.66,"norm":{"1":0.2433,"15":0.2533,"5":0.2767}}}}}}

In my Filebeat container's terminal, when running the filebeat command in verbose mode, I also periodically get the following logs:

2018-08-15T16:26:41.986Z        DEBUG   [input] input/input.go:124      Run input
2018-08-15T16:26:41.986Z        DEBUG   [input] log/input.go:147        Start next scan
2018-08-15T16:26:41.986Z        DEBUG   [input] log/input.go:168        input states cleaned up. Before: 0, After: 0, Pending: 0

I was finally able to resolve my problem. First of all, the container connectivity issue was fixed as described in the UPDATE (Aug 15, 2018) section of my question.

The reason Filebeat was not sending logs to Logstash was that I had not explicitly specified my input/output configurations as enabled (a frustrating fact to me, since it is not clearly mentioned in the docs). So, changing my filebeat.yml file to the following did the trick.

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - ${PWD}/filebeat-volume/data/*.txt

output.logstash:
  enabled: true
  hosts: ["elk:5044"]
  index: "your cusotm index"

setup.kibana:
  host: "elk:5601"

Networking is namespaced in containers by default, meaning that each container gets its own private IP, and localhost inside a container is local to just that container.

This means you need to point your config files at a DNS entry for the elastic server rather than localhost. With compose and swarm mode, the service name is automatically given a DNS entry that points to your containers:

input {
  beats {
    port => "5044"
  }
}

output {
  elasticsearch {
    hosts => [ "elk:9200" ]
    index => "%{[@metadata][beat]}"
  }
}

This also requires a common network between the containers. You get this by default when everything is created in the same compose file. When you have multiple stacks/projects deployed, you need to define a common external network in at least one of the files. Since I cannot tell the project name of your elk project to know the full network name, you could make this change to elk to connect it to filebeat_net:

version: '2'

services:
  elk:
    container_name: elk
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5045:5044"
    volumes:
      - /path/to/volumed-folder:/logstash
    networks:
      - elk_net
      - filebeat_net

networks:
  elk_net:
    driver: bridge
  filebeat_net:
    external: true
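
One caveat worth adding here: Compose does not create networks marked external: true, so the shared network must already exist before either stack is brought up, e.g.:

# create the shared network once, before docker-compose up on either project
docker network create filebeat_net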

I ran into a similar problem, but what happened in my case was that my port was not exposed to applications outside the container. All I did was expose the port to other applications: I did this when running the docker container with the -p option, 5044 being the port that listens for requests.

docker run -d --name logstash \
  -p 5044:5044 \
  --restart=always \
  -e "XPACK.MONITORING.ELASTICSEARCH.URL=http://ELASTIC_IP:9200" \
  docker.elastic.co/logstash/logstash:7.0.0
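
You can then check from the host that the published port accepts connections (my own check, assuming netcat is available):

# -z: just scan for a listener, -v: verbose output
nc -zv localhost 5044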
