I have one user bulk-deleting some 50K documents from one container using a stored procedure.
Meanwhile another user is trying to log in to the web app (connected to the same Cosmos DB), but the request fails because the rate limit is exceeded.
What is the best practice in this case to avoid service outages like the one described?
a) Should I provision RUs per collection? b) Can I set a cap on the RUs consumed by bulk operations from code when making a request? c) Is there any other approach?
More details on my current (naive/newbie) implementation:
It is recommended not to use stored procedures for bulk delete operations. Stored procedures run only on the primary replica, which means they can leverage only 1/4 of the total provisioned RU/s. You will get better throughput utilization and more efficiency by doing bulk operations with the SDK client in bulk mode.
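As a rough illustration of doing the deletes client-side instead of in a stored procedure, here is a minimal sketch in Python that issues deletes concurrently while capping the number of in-flight requests. The `delete_one` callable is an assumption standing in for a per-item delete (e.g. a wrapper around `container.delete_item` in the `azure-cosmos` SDK); the demo uses a stub so it runs without a database.

```python
import asyncio

async def bulk_delete(ids, delete_one, max_concurrency=10):
    """Delete documents concurrently, capping in-flight requests.

    delete_one: per-item delete coroutine (hypothetical wrapper around
    the Cosmos SDK's delete call; a stub is used in the demo below).
    """
    sem = asyncio.Semaphore(max_concurrency)

    async def worker(doc_id):
        async with sem:                 # at most max_concurrency in flight
            await delete_one(doc_id)

    await asyncio.gather(*(worker(i) for i in ids))

# Demo with a stub "delete" so the sketch is runnable without Cosmos DB:
deleted = []

async def fake_delete(doc_id):
    await asyncio.sleep(0)              # simulate an async network call
    deleted.append(doc_id)

asyncio.run(bulk_delete(range(50), fake_delete, max_concurrency=5))
print(len(deleted))  # 50
```

The semaphore is also one crude answer to question (b): it bounds how much of the provisioned RU/s the bulk job can consume at once.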
Whether you provision throughput at the database level or the container level depends on a couple of things. If you have a large number of containers that get roughly the same number of requests and the same amount of storage, database-level throughput is fine. If requests and storage are asymmetric, give the containers that diverge greatly from the others their own dedicated throughput. Learn more about the differences.
You cannot throttle requests on a container directly. You will need to implement Queue-based load leveling in your application.
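A minimal sketch of queue-based load leveling, assuming a single worker thread draining delete tasks at a fixed rate so a burst of 50K deletes cannot starve interactive requests (the handler here is a stub; in practice it would call the Cosmos SDK):

```python
import queue
import threading
import time

def load_leveled_worker(q, handle, rate_per_sec):
    """Drain tasks from the queue at a fixed rate so bursts of work
    don't consume all provisioned RU/s at once."""
    interval = 1.0 / rate_per_sec
    while True:
        task = q.get()
        if task is None:        # sentinel: stop the worker
            break
        handle(task)            # e.g. delete one document
        time.sleep(interval)

# Demo: 20 queued deletes handled at ~100 ops/sec by a stub handler.
processed = []
q = queue.Queue()
for i in range(20):
    q.put(i)
q.put(None)

t = threading.Thread(target=load_leveled_worker,
                     args=(q, processed.append, 100))
t.start()
t.join()
print(len(processed))  # 20
```

In a real system the queue would typically be an external one (e.g. a message queue) so the producer and the consumer can scale independently.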
Overall, if you've provisioned 400 RU/s and are trying to bulk delete 50K records, you are under-provisioned and need to increase throughput. In addition, if your workload is highly variable, with long periods of little to no traffic and short bursts of high volume, you may want to consider serverless or autoscale throughput.
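When a request does get throttled, Cosmos DB returns status 429 along with a suggested retry delay, and the SDKs already retry this automatically. Purely for illustration, a hand-rolled version of that behavior looks like the following sketch (the `RateLimited` exception and `flaky_op` are stubs invented for the demo, not SDK types):

```python
import time

class RateLimited(Exception):
    """Stub for a 429 response carrying the server's retry hint."""
    def __init__(self, retry_after_ms):
        self.retry_after_ms = retry_after_ms

def execute_with_backoff(op, max_retries=5):
    """Retry an operation on rate limiting, honoring the suggested delay."""
    for _ in range(max_retries):
        try:
            return op()
        except RateLimited as exc:
            time.sleep(exc.retry_after_ms / 1000.0)
    raise RuntimeError("still throttled after retries")

# Demo: the stub operation fails twice with 429, then succeeds.
calls = {"n": 0}

def flaky_op():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimited(retry_after_ms=1)
    return "ok"

print(execute_with_backoff(flaky_op))  # ok
```

Retries smooth over brief spikes, but they are no substitute for provisioning enough RU/s (or using autoscale) for sustained bulk work.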