How can I use Kafka to retain logs in Logstash for a longer period?
Currently I use a redis -> s3 -> elasticsearch -> kibana stack to pipe and visualise my logs. But due to the large volume of data in Elasticsearch, I can only retain logs for up to 7 days.
I want to bring a Kafka cluster into this stack and retain logs for a longer period. I am thinking of the following stack.
app nodes piping logs to kafka -> kafka cluster -> elasticsearch cluster -> kibana
How can I use Kafka to retain logs for a longer period?
Looking through the Apache Kafka broker configs, there are two properties that determine when a log will get deleted: one by time and the other by space.
log.retention.{ms,minutes,hours}
log.retention.bytes
Also note that if both log.retention.hours and log.retention.bytes are set, a segment is deleted when either limit is exceeded.
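The "either limit" rule above can be sketched as follows. This is a toy illustration of the decision, not Kafka's actual source code; the function name and parameters are made up for clarity:

```python
# Toy sketch of the broker's retention decision (not Kafka source code):
# a segment becomes eligible for deletion when EITHER the time limit or
# the size limit is exceeded. retention_bytes == -1 disables the size check.

def segment_expired(age_ms: int, log_size_bytes: int,
                    retention_ms: int, retention_bytes: int) -> bool:
    time_exceeded = age_ms > retention_ms
    size_exceeded = retention_bytes != -1 and log_size_bytes > retention_bytes
    return time_exceeded or size_exceeded
```

With retention_bytes left at -1, only the age of the segment matters, which is why the answer below recommends not touching the size setting.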
Those two dictate when logs are deleted in Kafka. log.retention.bytes defaults to -1, and I'm pretty sure leaving it at -1 means the time config alone determines when a log gets deleted.
So to directly answer your question, set log.retention.hours to however many hours you wish to retain your data, and don't change the log.retention.bytes configuration.