
Azure Cosmos DB: 'Request rate is large' for simple count query

I'm using Cosmos DB with the Mongo adapter, accessing it via the Ruby mongo driver. Currently there are about 2.5M records in the database.

When querying the total number of records, there's no problem whatsoever:

2.2.5 :011 > mongo_collection.count
D, [2017-11-24T11:52:39.796716 #9792] DEBUG -- : MONGODB | XXX.documents.azure.com:10255 | admin.count | STARTED | {"count"=>"xp_events", "query"=>{}}
D, [2017-11-24T11:52:39.954645 #9792] DEBUG -- : MONGODB | XXX.documents.azure.com:10255 | admin.count | SUCCEEDED | 0.15778699999999998s
 => 2565825

But when I try to count the number of records matching a simple filter, I run into the 'Request rate is large' error:

2.2.5 :014 > mongo_collection.find(some_field: 'some_value').count
D, [2017-11-24T11:56:11.926812 #9792] DEBUG -- : MONGODB | XXX.documents.azure.com:10255 | admin.count | STARTED | {"count"=>"some_table", "query"=>{"some_field"=>"some_value"}}
D, [2017-11-24T11:56:24.629659 #9792] DEBUG -- : MONGODB | XXX.documents.azure.com:10255 | admin.count | FAILED | Message: {"Errors":["Request rate is large"]}
ActivityId: 0000-0000-0000-000000000000, Request URI: /apps/XXX/services/XXX/partitions/XXX/replicas/XXX/, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.17.101.1 (16500), Message: {"Errors":["Request rate is large"]}

I understand how the error works, but I don't understand how such a query can max out the RU/s (set to the maximum of 10,000), since the field I'm querying on is supposed to be indexed (automatically).

Any advice would be greatly appreciated!

The error is by design: it means an application is sending requests to the DocumentDB service at a rate higher than the 'reserved throughput' level for the collection tier.

The solution is to retry the same request after some time. For more solutions, check this article.
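As a minimal sketch of that retry advice with the Ruby mongo driver (the exception class, the message check, and the backoff values are my assumptions, not anything the service prescribes):

def with_throttle_retry(max_attempts: 5)
  attempt = 0
  begin
    yield
  rescue Mongo::Error::OperationFailure => e
    # Cosmos DB reports throttling as "Request rate is large"; re-raise anything else.
    raise unless e.message.include?('Request rate is large')
    attempt += 1
    raise if attempt >= max_attempts
    sleep(0.5 * 2**attempt) # back off: 1s, 2s, 4s, ...
    retry
  end
end

count = with_throttle_retry { mongo_collection.find(some_field: 'some_value').count }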

I ran into this today. As others have suggested, Azure services are regulated by the price you are willing to pay. I found an easy answer that costs just a bit more money.

I logged into Azure, found the Cosmos DB item, opened the database, and found the collection. Each collection has a "Scale" option. There, I raised the throughput limit from the previous setting of 1,000 to the maximum of 10,000 for that one collection. I ran the program, all documents were updated smoothly in about 5 minutes, and then in Azure I turned the limit back to 1,000.
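If you prefer to do this from code rather than the portal, Cosmos DB's API for MongoDB exposes extension commands. A rough sketch with the Ruby driver, assuming the UpdateCollection custom action is supported on your account (the connection-string environment variable and the collection name are placeholders):

require 'mongo'

client = Mongo::Client.new(ENV['COSMOS_CONNECTION_STRING'])

# Temporarily raise the collection's provisioned throughput (RU/s).
client.database.command(
  customAction: 'UpdateCollection',
  collection: 'some_table',
  offerThroughput: 10_000
)

# ... run the heavy workload here ...

# Scale back down afterwards so you only pay the higher rate briefly.
client.database.command(
  customAction: 'UpdateCollection',
  collection: 'some_table',
  offerThroughput: 1_000
)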

The daily price jumped from $1.20 to $19.20 for about 10 minutes, otherwise all good.

Otherwise it would have taken me an hour or two to work out all of the steps to re-run the uploads, and another few hours to make sure the collection was correct after that.

You must raise the throughput limit. By default it is 1,000 RU/s; I set it to 3,000 and it stopped failing.
