
Azure Stream Analytics job degrading while pushing data to Cosmos DB

I have data getting pushed from Azure IoT Hub -> Stream Analytics -> Cosmos DB.

I had 1 simulated device and my Cosmos DB collection was provisioned at 1000 RU/s, and it worked fine. Now I have 10 simulated devices and the collection has been scaled to 15,000 RU/s, but my Stream Analytics job is still degrading.

Do I need to increase the number of parallel connections to the collection?

Can we make this more optimal, given that Azure pricing for Cosmos DB depends on throughput and RUs?

I just want to share some thoughts with you about improving the write performance of Cosmos DB here.

1. Consistency Level

Based on the document:

Depending on what levels of read consistency your scenario needs against read and write latency, you can choose a consistency level on your database account.

You could try to set the consistency level to Eventual. For details, please refer to here.
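As a minimal sketch, this is what requesting Eventual consistency looks like from the azure-cosmos Python SDK, assuming you also manage the account from Python; the Stream Analytics output connects on its own, so for that path you would change the account's default consistency in the portal instead. The endpoint and key below are placeholders.

    from azure.cosmos import CosmosClient

    # Placeholder endpoint/key - substitute your own account values.
    COSMOS_URL = "https://<your-account>.documents.azure.com:443/"
    COSMOS_KEY = "<your-account-key>"

    # Request Eventual consistency (the weakest, cheapest level) for this client's
    # operations. A client can only relax the account's default consistency, so the
    # "Default consistency" setting on the account is what affects all writers,
    # including the Stream Analytics output.
    client = CosmosClient(COSMOS_URL, credential=COSMOS_KEY,
                          consistency_level="Eventual")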

2. Indexing

Based on the document:

By default, Azure Cosmos DB enables synchronous indexing on each CRUD operation to your collection. This is another useful option to control the write/read performance in Azure Cosmos DB.

Please try to set the indexing mode to lazy. Also, remove any indexes you do not need.
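As a sketch of that idea with the azure-cosmos Python SDK, the indexing policy of an existing container can be replaced as shown below. The database/container names and the excluded path are hypothetical, and note that newer Cosmos DB accounts discourage lazy indexing, so treat this as illustrative of the option rather than a recommended default.

    from azure.cosmos import CosmosClient, PartitionKey

    client = CosmosClient(COSMOS_URL, credential=COSMOS_KEY)
    database = client.get_database_client("iot")               # hypothetical database

    # Lazy indexing defers index maintenance off the write path, and excluding
    # properties you never query on lowers the RU charge of every insert.
    lazy_policy = {
        "indexingMode": "lazy",
        "automatic": True,
        "includedPaths": [{"path": "/*"}],
        "excludedPaths": [{"path": "/payload/*"}],             # hypothetical unqueried property
    }

    database.replace_container(
        "telemetry",                                           # hypothetical container
        partition_key=PartitionKey(path="/deviceId"),
        indexing_policy=lazy_policy,
    )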

3. Partition

Based on the document:

Azure Cosmos DB unlimited containers are the recommended approach for partitioning your data, as Azure Cosmos DB automatically scales partitions based on your workload. When writing to unlimited containers, Stream Analytics uses as many parallel writers as the previous query step or input partitioning scheme.

Please partition your collection and pass the partition key in the output configuration to improve write performance.
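For illustration, a minimal sketch of creating a partitioned (unlimited) container with the azure-cosmos Python SDK; "/deviceId" is an assumed partition key matching a property emitted by the simulated devices, and the names are hypothetical. In the Stream Analytics Cosmos DB output you would then set the "Partition key" field to the same property so parallel writers spread across partitions instead of contending on one.

    from azure.cosmos import CosmosClient, PartitionKey

    client = CosmosClient(COSMOS_URL, credential=COSMOS_KEY)
    database = client.create_database_if_not_exists(id="iot")     # hypothetical database

    # Documents are distributed across physical partitions by /deviceId, and
    # throughput is provisioned at 15,000 RU/s as in the question.
    database.create_container_if_not_exists(
        id="telemetry",                                            # hypothetical container
        partition_key=PartitionKey(path="/deviceId"),
        offer_throughput=15000,
    )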
