How can I reduce the Kafka log file size when using Alpakka?
I am doing data replication in Alpakka using Consumer.committableSource. But the Kafka log file grows very quickly; it reaches 5 GB in a day. To solve this, I want to delete processed data immediately. I am using the deleteRecords method of AdminClient to delete everything up to an offset, but when I look at the log file, the data corresponding to that offset is not deleted.
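For context, this is roughly how deleteRecords is invoked (the bootstrap server, topic name, partition and offset below are placeholder assumptions, not taken from the question). One detail that may explain the observed behavior: deleteRecords only advances the partition's log start offset; the broker reclaims disk space later, when whole log segments fall entirely below that offset, so the bytes are not necessarily removed from the segment files immediately.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DeleteRecordsResult;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;
import java.util.Properties;

public class DeleteProcessedRecords {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Placeholder topic, partition and offset.
            TopicPartition tp = new TopicPartition("replicated-topic", 0);

            // deleteRecords moves the log start offset forward; records below
            // it become unreadable, but the on-disk segment files shrink only
            // when a whole segment drops below the new log start offset.
            Map<TopicPartition, RecordsToDelete> toDelete =
                    Map.of(tp, RecordsToDelete.beforeOffset(12345L));
            DeleteRecordsResult result = admin.deleteRecords(toDelete);
            result.all().get(); // block until the brokers acknowledge
        }
    }
}
```

Tuning `segment.bytes` (or `segment.ms`) on the topic makes segments roll more often, so deleted ranges are reclaimed sooner.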
When using committableSource, you need to acknowledge that a record has been successfully read, and is ready to be cleaned up, by committing its offset. You can do that by calling commitJavadsl(). Take a look at the example in the documentation for more information.