Azure Cosmos DB: requests exceeding rate limit when bulk deleting records

I have one user bulk deleting some 50K documents from one container using a stored procedure.

Meanwhile, another user is trying to log in to the web app (connected to the same Cosmos DB), but the request fails because the rate limit is exceeded.

What is the best practice in this case to avoid service disruptions like the one described?

a) Should I provision RUs per collection?
b) Can I set a cap on the RUs consumed by bulk operations from code when making a request?
c) Is there any other approach?

More details on my current (naive/newbie) implementation:

  1. Two collections: RawDataCollection and TransformedDataCollection
  2. Partition key values are the customer account number
  3. RUs are set at the database level (the current dev deployment has the 400 RU/s minimum)
  4. Bulk insert/delete actions are needed in both collections
  5. User profile data (for login purposes, etc.) is stored in RawDataCollection
  6. Bulk actions are low priority in terms of service level, meaning they could be put on hold if a higher-priority task comes in.
  7. Normally, a user who logs in retrieves only a small amount of information. This is high priority in terms of service level.

It is recommended not to use stored procedures for bulk delete operations. Stored procedures operate only on the primary replica, meaning they can leverage only 1/4 of the total RU/s provisioned. You will get better throughput utilization and more efficiency doing bulk operations with the SDK client in Bulk Mode.
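For illustration, here is a minimal sketch of that approach using the JavaScript/TypeScript SDK (@azure/cosmos) and its bulk API. The connection string, database and container names, and the retry handling are assumptions; treat it as a sketch, not a drop-in implementation:

```typescript
import { CosmosClient, BulkOperationType, OperationInput } from "@azure/cosmos";

// Assumed names -- adjust the connection string, database, and container to your setup.
const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!);
const container = client.database("mydb").container("RawDataCollection");

// Delete documents in batches of 100 (the bulk API's per-call limit),
// using the customer account number as the partition key, as in the question.
async function bulkDelete(ids: string[], accountNumber: string): Promise<void> {
  for (let i = 0; i < ids.length; i += 100) {
    const operations: OperationInput[] = ids.slice(i, i + 100).map((id) => ({
      operationType: BulkOperationType.Delete,
      id,
      partitionKey: accountNumber,
    }));
    const responses = await container.items.bulk(operations);
    // Status 429 means that individual operation was rate limited.
    const throttled = responses.filter((r) => r.statusCode === 429);
    if (throttled.length > 0) {
      // Back off and retry the failed subset -- omitted in this sketch.
    }
  }
}
```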

Whether you provision throughput at the database level or the container level depends on a couple of things. If you have a large number of containers that receive roughly the same volume of requests and storage, database-level throughput is fine. If requests and storage are asymmetric, then give the containers that diverge greatly from the others their own dedicated throughput. Learn more about the differences.
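If you go the dedicated-throughput route, it can be set when a container is created. A minimal sketch with the JavaScript/TypeScript SDK, where the 400 and 1000 RU/s figures and all names are assumptions:

```typescript
import { CosmosClient } from "@azure/cosmos";

const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!);

async function createContainers(): Promise<void> {
  // Database with shared throughput (the question's 400 RU/s minimum).
  const { database } = await client.databases.createIfNotExists({
    id: "mydb",
    throughput: 400,
  });

  // This container shares the database-level throughput.
  await database.containers.createIfNotExists({
    id: "TransformedDataCollection",
    partitionKey: "/accountNumber",
  });

  // This container gets its own dedicated throughput because its load diverges.
  await database.containers.createIfNotExists({
    id: "RawDataCollection",
    partitionKey: "/accountNumber",
    throughput: 1000,
  });
}
```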

You cannot throttle requests on a container directly. You will need to implement Queue-based load leveling in your application.
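As an example, here is a minimal sketch of that pattern using an Azure Storage queue (@azure/storage-queue). The queue name, message shape, polling interval, and the bulkDelete helper from the earlier sketch are all assumptions: the web app enqueues the job and returns immediately, while a background worker drains the queue slowly enough to leave RU headroom for logins.

```typescript
import { QueueClient } from "@azure/storage-queue";

// Assumed queue name and connection string.
const queue = new QueueClient(process.env.STORAGE_CONNECTION_STRING!, "bulk-deletes");

// Producer: the web app enqueues the bulk-delete job instead of running it inline.
async function requestBulkDelete(accountNumber: string, ids: string[]): Promise<void> {
  await queue.sendMessage(JSON.stringify({ accountNumber, ids }));
}

// Consumer: a background worker processes one job at a time, so bulk work
// never competes with login traffic for the full RU budget.
async function worker(): Promise<void> {
  while (true) {
    const { receivedMessageItems } = await queue.receiveMessages({ numberOfMessages: 1 });
    for (const msg of receivedMessageItems) {
      const job = JSON.parse(msg.messageText) as { accountNumber: string; ids: string[] };
      await bulkDelete(job.ids, job.accountNumber); // from the earlier bulk-mode sketch
      await queue.deleteMessage(msg.messageId, msg.popReceipt);
    }
    // Pause between polls to cap the rate at which RUs are consumed.
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
}
```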

Overall, if you've provisioned 400 RU/s and are trying to bulk delete 50K records, you are under-provisioned and need to increase throughput. In addition, if your workload is highly variable, with long periods of little to no traffic and short bursts of high volume, you may want to consider using serverless or autoscale throughput.
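For autoscale, a maximum RU/s is specified instead of a fixed figure, and the container scales between 10% of that maximum and the maximum. A sketch, assuming a 4000 RU/s ceiling:

```typescript
import { CosmosClient } from "@azure/cosmos";

const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!);

// Autoscale container: throughput scales automatically between 400 and 4000 RU/s.
async function createAutoscaleContainer(): Promise<void> {
  const { database } = await client.databases.createIfNotExists({ id: "mydb" });
  await database.containers.createIfNotExists({
    id: "RawDataCollection",
    partitionKey: "/accountNumber",
    maxThroughput: 4000,
  });
}
```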
