How to configure Redis to act as a message queue in ELK and clear disk space as messages are consumed
I have an ELK setup as below:

Kibana <-- ElasticSearch <-- Logstash <-- FileBeat (fetching logs from different log sources)

This setup breaks down when the message inflow is high. From what I have read on the internet, people recommend adding Redis to this setup to give ES breathing space to consume messages. So I now wish to set up something like this:

Kibana <-- ElasticSearch <-- Logstash <-- REDIS <-- FileBeat (fetching logs from different log sources)

I want Redis to act as an intermediary that holds messages so that the consumer end does not become a bottleneck. But here the Redis dump.rdb keeps growing, and once messages are consumed by Logstash it does not shrink back (the space is not freed). Below is my redis.conf:
bind host
port port
tcp-backlog 511
timeout 0
tcp-keepalive 0
daemonize no
supervised no
pidfile /var/run/redis.pid
loglevel notice
logfile "/tmp/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
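One thing worth noting about the config above: the three `save` lines schedule RDB snapshots, and dump.rdb only reflects the dataset as of the last snapshot. Redis does free memory when Logstash pops messages, but the file on disk is not truncated in between snapshots, so under sustained inflow it keeps capturing the current backlog. If the queue is meant to be purely transient, one option is to disable RDB snapshotting entirely. This is a sketch, assuming you can tolerate losing queued messages on a Redis restart (Filebeat will retry unacknowledged batches); the `maxmemory` value is illustrative:

```
# Disable RDB snapshots so no dump.rdb is written for the transient queue.
# (Messages are lost if Redis restarts -- acceptable only if the shipper resends.)
save ""

# Cap memory so a stalled consumer cannot exhaust RAM; with noeviction,
# producers receive write errors instead of Redis silently dropping data.
maxmemory 2gb
maxmemory-policy noeviction
```

With snapshots disabled, the disk-growth problem disappears because consumed messages only ever lived in memory.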
EDIT: FileBeat config:
filebeat:
  prospectors:
    -
      paths:
        - logPath
      input_type: log
      tail_files: true
output:
  redis:
    host: "host"
    port: port
    save_topology: true
    index: "filebeat"
    db: 0
    db_topology: 1
    timeout: 5
    reconnect_interval: 1
shipper:
logging:
  to_files: true
  files:
    path: /tmp
    name: mybeat.log
    rotateeverybytes: 10485760
  level: warning
Logstash config:
input {
  redis {
    host => "host"
    port => "port"
    type => "redis-input"
    data_type => "list"
    key => "filebeat"
  }
}
output {
  elasticsearch {
    hosts => ["hosts"]
    manage_template => false
    index => "filebeat-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
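To confirm that Logstash is actually draining the list rather than the backlog accumulating, the queue depth and persistence state can be inspected from the shell. This is a sketch assuming `redis-cli` is available and `<host>`/`<port>` match the placeholders in the configs above; `filebeat` is the list key shared by the Filebeat output and the Logstash input:

```
redis-cli -h <host> -p <port> LLEN filebeat       # number of pending messages
redis-cli -h <host> -p <port> INFO persistence    # rdb_changes_since_last_save, last save time
```

If `LLEN` stays near zero under load, Redis is doing its job as a buffer and the disk growth is purely a snapshotting artifact.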
Let me know if more info is needed. TIA!
I think your problem might be with the way the messages are being stored in and retrieved from Redis. Ideally you should use the List data structure of Redis, using LPUSH and LPOP to insert and retrieve messages respectively.
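The List semantics can be sketched with a plain Python list standing in for the Redis server (a simplification for illustration; a real client would issue the corresponding Redis commands). One detail worth noting: the pop side determines ordering. LPUSH paired with RPOP drains the list in FIFO order (a queue), while LPUSH paired with LPOP behaves as a stack:

```python
# Minimal sketch of Redis list semantics using a plain Python list,
# where index 0 is the "left" end of the Redis list.

def lpush(q, item):
    """Redis LPUSH: prepend the item at the left end."""
    q.insert(0, item)

def rpop(q):
    """Redis RPOP: remove and return the item at the right end."""
    return q.pop()

def lpop(q):
    """Redis LPOP: remove and return the item at the left end."""
    return q.pop(0)

queue = []
for msg in ["log1", "log2", "log3"]:
    lpush(queue, msg)          # queue is now ["log3", "log2", "log1"]

print(rpop(queue))             # log1 -> FIFO when paired with LPUSH
print(lpop(queue))             # log3 -> LIFO when paired with LPUSH
```

Either way, once an element is popped it is gone from Redis memory entirely, which is exactly the self-clearing behavior wanted from a broker in this pipeline.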