
DynamoDB large transaction writes are timing out

I have a service that receives events varying in size from ~5 to 10k items. We split these events into chunks, and each chunk must be written in a transaction because we do post-processing that depends on a successful write of every item in the chunk. Ordering of the events matters, so we can't dead-letter them for later processing. We're running into an issue where very large (10k-item) events clog up the event processor and cause a timeout (currently set to 15s). I'm trying to find a way to speed up the processing of these large events and eliminate the timeouts.

I'm open to ideas, but I'm particularly curious whether there are any pitfalls to running transaction writes concurrently — e.g. splitting the event into chunks of 100 and having X threads write them to DynamoDB in parallel.
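The approach described above can be sketched roughly as follows. This is a minimal, hypothetical sketch assuming a boto3-style client: the table name, item shape, chunk size, and worker count are all illustrative assumptions, not values from the question.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 100   # DynamoDB's TransactWriteItems limit is 100 items
MAX_WORKERS = 8    # illustrative; tune to your provisioned capacity

def chunk(items, size=CHUNK_SIZE):
    """Yield successive fixed-size slices of the item list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def write_chunk(client, table_name, items):
    # One TransactWriteItems call per chunk: all items succeed or none do.
    client.transact_write_items(
        TransactItems=[
            {"Put": {"TableName": table_name, "Item": item}} for item in items
        ]
    )

def write_event(client, table_name, items):
    # Fan the chunks out across a thread pool and wait for them all,
    # re-raising the first failure so the caller can react.
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        futures = [
            pool.submit(write_chunk, client, table_name, c)
            for c in chunk(items)
        ]
        for f in futures:
            f.result()
```

Note that if ordering matters *within* an event, fanning chunks out in parallel gives up write order across chunks; this sketch assumes only cross-event ordering needs to be preserved.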

There is no problem with multi-threading writes to DynamoDB, so long as you have the capacity to handle the extra throughput.

I would also advise trying smaller batches: with 100 items in a transaction, if any one of them fails for any reason, they all fail. I typically suggest aiming for batch sizes of around 10, but of course this depends on your use case.
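Smaller batches also make retries cheap: when a 10-item transaction is cancelled, only those 10 items are replayed. Below is a hypothetical retry helper with jittered exponential backoff; the writer callable, attempt limit, and delay values are assumptions for illustration.

```python
import random
import time

def write_with_retry(write_fn, items, max_attempts=5, base_delay=0.1):
    """Call write_fn(items), retrying on failure with full-jitter backoff.

    Returns the number of attempts used; re-raises after max_attempts.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            write_fn(items)
            return attempt
        except Exception:
            if attempt == max_attempts:
                raise
            # Full jitter keeps many concurrent retries from colliding
            # against the table at the same instant.
            time.sleep(random.uniform(0, base_delay * 2 ** (attempt - 1)))
```

In practice you would catch the specific cancellation exception your SDK raises (e.g. a transaction-cancelled error) rather than bare `Exception`.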

Also ensure that no two threads target the same item at the same time, as conflicting writes will cause transactions to be cancelled, resulting in large numbers of failed batches.
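One simple way to guarantee this is to route items to workers by their partition key, so every write for a given key lands on the same thread. A minimal sketch, assuming dict-shaped items with a key attribute named `"pk"` (the key name and bucket count are illustrative assumptions):

```python
from collections import defaultdict

def bucket_by_key(items, num_buckets, key_name="pk"):
    """Group items into worker buckets by hashing the partition key.

    Items sharing a key always land in the same bucket, so no two
    workers can write the same item concurrently.
    """
    buckets = defaultdict(list)
    for item in items:
        buckets[hash(item[key_name]) % num_buckets].append(item)
    return buckets
```

Each bucket can then be chunked and written by its own worker without any risk of cross-thread transaction conflicts on the same item.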

In summary: keep batches as small as possible, ensure your table has adequate capacity, and make sure you don't hit the same items concurrently.
