
Confluent.Kafka - Topic Log Compaction

I'm currently building publisher and consumer components using Confluent.Kafka, and I'm trying to understand whether there is anything different I need to do in code. I'm able to create a topic with log compaction enabled, but I don't fully understand how to work with it in C# / .NET Core.

My main question is: after creating a topic with log compaction enabled, is there anything that must be done in code to use it, or is it all handled under the hood?

If there are code-specific aspects, does anyone have an example they can point me to? I've been looking into this for a couple of days, and while I can find plenty of information on how to create a topic with log compaction enabled (which I've already achieved), I can find nothing on how it might affect the producer and consumer code.

Any help would be much appreciated.

No, you don't need to make any changes to your code to use log compaction; you only need to configure the topic.

The only code-level difference is that you can delete all events with a given key by producing a tombstone, i.e. a message whose value is null (in C#, literally null).
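A minimal sketch of producing a tombstone with Confluent.Kafka. The broker address, topic name, and key are placeholders for illustration; the topic is assumed to already exist with compaction enabled.

```csharp
using System.Threading.Tasks;
using Confluent.Kafka;

class TombstoneExample
{
    static async Task Main()
    {
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

        using var producer = new ProducerBuilder<string, string>(config).Build();

        // Normal event: after compaction, the latest value for this key survives.
        await producer.ProduceAsync("user-profiles",
            new Message<string, string> { Key = "user-123", Value = "{\"name\":\"Alice\"}" });

        // Tombstone: a null value marks the key for deletion. Once the
        // delete.retention.ms window elapses, compaction removes the key entirely.
        await producer.ProduceAsync("user-profiles",
            new Message<string, string> { Key = "user-123", Value = null });

        producer.Flush();
    }
}
```

On the consumer side a tombstone arrives as an ordinary message whose Value is null, so if you maintain a local materialized view you would treat a null value as a delete for that key.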

Make sure you really understand how log compaction works; the official Kafka documentation covers it in detail. To activate log compaction you must set cleanup.policy=compact when creating the topic, but you should also consider the other topic configurations that control how often the topic is compacted: delete.retention.ms, segment.ms, and min.cleanable.dirty.ratio.
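Since the asker has already created the topic this is just for completeness: a sketch of creating a compacted topic with those configurations set, using the Confluent.Kafka AdminClient. The broker address, topic name, and the specific values chosen are illustrative assumptions, not recommendations.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Confluent.Kafka;
using Confluent.Kafka.Admin;

class CreateCompactedTopic
{
    static async Task Main()
    {
        var config = new AdminClientConfig { BootstrapServers = "localhost:9092" };

        using var admin = new AdminClientBuilder(config).Build();

        await admin.CreateTopicsAsync(new[]
        {
            new TopicSpecification
            {
                Name = "user-profiles",      // placeholder topic name
                NumPartitions = 3,
                ReplicationFactor = 1,
                Configs = new Dictionary<string, string>
                {
                    // Enables log compaction for this topic.
                    { "cleanup.policy", "compact" },
                    // How long tombstones are retained before being purged.
                    { "delete.retention.ms", "86400000" },   // 1 day
                    // Roll segments at least this often; only closed segments
                    // are eligible for compaction.
                    { "segment.ms", "604800000" },           // 7 days
                    // Compact once at least 50% of the log is uncompacted.
                    { "min.cleanable.dirty.ratio", "0.5" }
                }
            }
        });
    }
}
```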

