I am working on an ad-hoc real-time stream processing framework that internally uses the java-chronicle library to exchange data between building blocks.
Chronicle uses disk space to store the items appended to the queue, and that space grows with every new message.
Since I consume each message only once (replay behavior is handled by Kafka outside the processing elements), processed entries could be removed and the disk space reclaimed. Is there a way to free the space consumed by chronicle files simply by removing entries from them?
The alternative approach would be to open a new chronicle after a fixed number of messages and keep track of fully consumed chronicles, which are then removed from disk... but that does not seem like a very smooth solution ;-)
So, my question is: is there an approach for removing processed/tailed entries from a chronicle?
You can now detect when a cycle has rolled over and delete the old files, e.g. once a day.
The underlying assumption is that disk space is cheap, although that is not always the case.
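A minimal sketch of such a cleanup, using only the JDK: it deletes rolled-over queue files older than a retention window. The `.cq4` suffix, the daily retention period, and the class name are assumptions for illustration; in newer Chronicle Queue versions a store-file-release callback (where available) would be the natural place to trigger this instead of a timer.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

// Hypothetical helper: remove chronicle cycle files that have not been
// modified within the retention window. Adapt the suffix filter to however
// your chronicle version names its rolled files.
public class ChronicleCleaner {

    /**
     * Deletes files in {@code dir} matching {@code suffix} whose last
     * modification is older than {@code retentionDays}.
     *
     * @return the number of files deleted
     */
    public static int deleteOlderThan(Path dir, int retentionDays, String suffix)
            throws IOException {
        Instant cutoff = Instant.now().minus(retentionDays, ChronoUnit.DAYS);
        int deleted = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir, "*" + suffix)) {
            for (Path f : files) {
                // Only touch files the writer has rolled away from, i.e. files
                // that have been idle for the whole retention window.
                if (Files.getLastModifiedTime(f).toInstant().isBefore(cutoff)) {
                    Files.delete(f);
                    deleted++;
                }
            }
        }
        return deleted;
    }
}
```

Running this once a day (from a scheduler, or from a rotation callback) keeps disk usage bounded to roughly the retention window's worth of messages.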