
How to push logs to elasticsearch in Filebeat instantly?

Here is my filebeat.yml:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - ../typescript/rate-limit-test/logs/*.log
  json.message_key: "message"
  json.keys_under_root: true
  json.overwrite_keys: true
  scan_frequency: 1s

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

logging.level: debug

output.elasticsearch:
  hosts: ["34.97.108.113:9200"]
  index: "filebeat-%{+yyyy-MM-dd}"
setup.template:
  name: 'filebeat'
  pattern: 'filebeat-*'
  enabled: true
setup.template.overwrite: true
setup.template.append_fields:
- name: time
  type: date

processors:
  - drop_fields:
      fields: ["agent","host","ecs","input","log"]

setup.ilm.enabled: false

I changed scan_frequency, but Elasticsearch still doesn't receive the logs any faster.

How can I get logs into Elasticsearch instantly?

Please help me.

There will never be an 'instantly' available log line in Elasticsearch. The file needs to be watched for a certain amount of changes or time, then the newly added lines have to be sent to Elasticsearch in a bulk request and indexed into the appropriate shard on the correct cluster node. Network latency, TLS, authentication and authorization, and concurrent write/search load: all of these affect the 'instantly' experience.

The speed of log ingestion and NRT (near-real-time) search depends on many factors and configuration options in Elasticsearch and Filebeat.

Regarding tuning Elasticsearch for indexing speed, have a look at this documentation and apply anything you have not done yet. A brief overview:

  • Disable swapping and enable memory locking ( bootstrap.memory_lock: true )
  • Consider reducing index.refresh_interval (defaults to 1s) for the index so that documents are flushed more often (at the cost of more I/O in the cluster)
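As a sketch of the second point: assuming an index matching the filebeat-* pattern from the config above already exists, the refresh interval can be lowered on it with an index-settings request like the following (the 500ms value is purely illustrative; shorter intervals trade extra I/O for fresher search results):

```
PUT filebeat-*/_settings
{
  "index": {
    "refresh_interval": "500ms"
  }
}
```

Alternatively, the same setting can be baked into the index template so that newly created daily indices pick it up automatically.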

For Filebeat, there is also good documentation about tuning, but in general, I see the following options:

  • Try different output.elasticsearch.bulk_max_size values (defaults to a batch size of 50) and monitor the ingestion speed. The optimal setting differs for each cluster configuration.
  • In high-load scenarios, when logs are written quickly, consider increasing the number of workers via output.elasticsearch.workers (defaults to 1).
  • In the opposite scenario, with only a few log lines being written, consider increasing the close_inactive and scan_frequency values for the harvester. Specifying a more suitable backoff will affect how aggressively Filebeat checks files for updates.
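Putting these options together, a tuned filebeat.yml fragment for the setup in the question could look like this (all values are illustrative starting points, not recommendations; the right numbers depend on your cluster and log volume):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - ../typescript/rate-limit-test/logs/*.log
  scan_frequency: 1s     # how often Filebeat looks for new files
  backoff: 1s            # initial wait after reaching EOF (default 1s)
  max_backoff: 10s       # upper bound for the backoff (default 10s)
  close_inactive: 5m     # release handles of files idle this long (default 5m)

output.elasticsearch:
  hosts: ["34.97.108.113:9200"]
  bulk_max_size: 50      # events per bulk request; experiment and monitor
  worker: 1              # parallel bulk workers per host; raise under high load
```

Lowering backoff makes Filebeat poll an already-open file for new lines more aggressively, which usually matters more for perceived latency than scan_frequency (which only controls how often new files are discovered).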
