I'm looking to use Kafka as my event store/stream for orders. Here are a few attributes:
I'm trying to get my Kafka setup right so that I can minimise the amount of work I have to do outside of Kafka. My concerns/questions are:
You might want to look into the "rack awareness" configuration for brokers, which enables rack-aware partition replication. This is mostly used to reduce cross-availability-zone traffic; you can read more about it here. The gist of it is that your consumers can fetch records from the "nearest" replica. In your case, a consumer sitting in London might fetch data only from brokers in London, assuming you operate a single cross-region cluster.
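As a rough sketch, follower fetching (available since Kafka 2.4) is enabled by tagging each broker with a rack, setting the rack-aware replica selector, and tagging consumers with a matching rack. The rack names below (`london`, `us-east`) are placeholders for your own zone/region identifiers:

```properties
# server.properties on each broker — use your own region/zone name here
broker.rack=london
# let consumers fetch from the closest in-sync replica
replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector
```

```properties
# consumer configuration — must match the broker.rack of the "near" brokers
client.rack=london
```

With this in place, a London consumer will prefer a London replica of the partition over the leader on the US east coast, provided that replica is in sync.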
Concerning latency: if you don't have any sub-second requirements, I would highly recommend operating a single cluster instead of two. The latency between the east coast and the UK shouldn't be too bad. Keep it simple; Kafka is very robust and can handle most faults within a single cluster (e.g. a broker dying). Start with a single cluster in one location; you will still be able to add a second one later and migrate your data over using MirrorMaker or a dedicated service.
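If you do end up adding a second cluster later, MirrorMaker 2 can replicate topics between them with a small config file. A minimal sketch, assuming two clusters named `us-east` and `london` (the aliases, bootstrap addresses, and topic pattern are placeholders):

```properties
# mm2.properties — replicate order topics from us-east to london
clusters = us-east, london
us-east.bootstrap.servers = us-east-broker:9092
london.bootstrap.servers = london-broker:9092

# enable one-way replication and pick which topics to mirror
us-east->london.enabled = true
us-east->london.topics = orders.*
```

Note that MirrorMaker 2 prefixes mirrored topics with the source cluster alias by default (e.g. `us-east.orders` on the London cluster), which is worth keeping in mind when planning consumer subscriptions.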
This would also mean you don't end up with the "same" topic duplicated per region. Separate your topics based on their content, not their location. Otherwise you'll have lots of fun when migrating the data format you use for orders; you want to be as flexible as possible for future changes.
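To make that concrete, a sketch of the two naming approaches (topic names are illustrative):

```
# location-based duplicates — every schema change must be applied twice:
orders-us-east
orders-london

# content-based — one topic, replicated across the cluster,
# with locality handled by rack awareness instead of topic names:
orders
```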