How to configure Redis to act as a message queue in ELK and clear disk space as messages are consumed
I have an ELK setup as follows:
Kibana <- ElasticSearch <- Logstash <- FileBeat (collecting logs from different log sources)
This setup breaks down when more messages flow in. From what I have read on the internet, people suggest using Redis in this setup to give ES room to consume the messages. So I now want to set up something like this:
Kibana <- ElasticSearch <- Logstash <- Redis <- FileBeat (collecting logs from different log sources)
I want Redis to act as a middleman that holds the messages so that the consumer side does not become a bottleneck. But the Redis dump.rdb here keeps growing, and it does not shrink (does not free up space) once Logstash has consumed the messages. Below is my redis.conf:
bind host
port port
tcp-backlog 511
timeout 0
tcp-keepalive 0
daemonize no
supervised no
pidfile /var/run/redis.pid
loglevel notice
logfile "/tmp/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
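To check whether Logstash is actually draining the queue (as opposed to dump.rdb simply not having been rewritten since the list shrank), the pending list length can be inspected directly. This is a minimal sketch, assuming the redis-py client is installed; "host" is the same placeholder used in the configs here, and 6379 stands in for the real port:

import redis

# Placeholder connection details, matching the configs in this post.
r = redis.StrictRedis(host="host", port=6379, db=0)

# Filebeat pushes events onto the "filebeat" list and Logstash pops them off,
# so a steadily growing length means the consumer side is falling behind.
print("pending events:", r.llen("filebeat"))

# dump.rdb only reflects the dataset as of the last snapshot (the "save" lines
# above), so it shrinks only after a new BGSAVE runs once the list has been drained.
print("last RDB snapshot:", r.lastsave())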
Edit: FileBeat configuration:
filebeat:
  prospectors:
    -
      paths:
        - logPath
      input_type: log
      tail_files: true
output:
  redis:
    host: "host"
    port: port
    save_topology: true
    index: "filebeat"
    db: 0
    db_topology: 1
    timeout: 5
    reconnect_interval: 1
shipper:
logging:
  to_files: true
  files:
    path: /tmp
    name: mybeat.log
    rotateeverybytes: 10485760
  level: warning
Logstash configuration:
input {
  redis {
    host => "host"
    port => "port"
    type => "redis-input"
    data_type => "list"
    key => "filebeat"
  }
}
output {
  elasticsearch {
    hosts => ["hosts"]
    manage_template => false
    index => "filebeat-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Please let me know if more information is needed. TIA!
I think your problem may be related to the way messages are stored in and retrieved from Redis.
Ideally, you should be using Redis's List data structure, with LPUSH and LPOP to insert and retrieve messages respectively.
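For illustration, a minimal sketch of that list-as-queue pattern, assuming the redis-py client (any Redis client exposes the same commands); the connection details are placeholders matching the post above:

import redis

# Placeholder connection details.
r = redis.StrictRedis(host="host", port=6379, db=0)

# Producer side: push a message onto the list.
r.lpush("filebeat", '{"message": "example log line"}')

# Consumer side: pop a message off again; once popped, the entry no longer
# occupies memory in Redis.
event = r.lpop("filebeat")
print(event)

Note that LPUSH combined with LPOP works from the same end of the list (LIFO); for strict FIFO ordering, one end is pushed and the other popped, which is roughly what the Filebeat Redis output and the Logstash redis input with data_type => "list" already do (push on one end, blocking pop from the other).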