
Azure Message size limit and IOT

I read through the Azure documentation and found that the message size limit for Storage Queues is 64 KB and for Service Bus is 256 KB. We are trying to develop an application that will read sensor data from some devices, call a REST service, and upload it to the cloud. This data will be stored in the queues and then dumped into a cloud database.

There is a chance that the sensor data collected is more than 256 KB. In such cases, what is the recommended approach? Do we need to split the data in the REST service and then put chunks of it in the queue, or is there another recommended pattern?

Any help is appreciated.

You have several conflicting technology statements. I will begin by clarifying a few.

  1. Service Bus/IoT Hub are not POST calls. A POST call would go to a RESTful service, which exists separately. IoT Hub uses a low-latency message-passing system that is abstracted away from you. These systems are intended for a high volume of small packets, which fits most IoT scenarios.

  2. In the situation in which a message is larger than 256 KB (which is very unusual for an IoT scenario; I would be interested to see why those messages are so large), you should ideally upload to Blob Storage. You can still post packets:

    • If your devices have access to the Blob Storage APIs, you should go that route.
    • If they do not, you should post the big packets to a REST endpoint and either hope they arrive intact or chop them up into smaller chunks.

      1. You can run post-hoc analytics on Blob Storage. I would recommend using the wasb:// prefix, as those containers are Hadoop-compatible and you can stand up analytics clusters on top of those storage mechanisms.
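The "chop it up" option above can be sketched as a simple chunker: split an oversized payload into sub-limit pieces, each tagged with a message id, sequence number, and total count so the receiver can reassemble them. This is a minimal illustration under assumptions, not Azure SDK code; the 256 KB figure is the Service Bus limit discussed above, and the envelope fields are my own naming.

```python
import math

# Service Bus standard tier caps messages at 256 KB; leave headroom
# for the envelope fields and message properties (chunk size assumed).
MAX_CHUNK_BYTES = 192 * 1024

def chunk_payload(message_id: str, payload: bytes, chunk_size: int = MAX_CHUNK_BYTES):
    """Split an oversized payload into ordered chunks small enough to enqueue."""
    total = math.ceil(len(payload) / chunk_size) or 1
    for seq in range(total):
        piece = payload[seq * chunk_size:(seq + 1) * chunk_size]
        yield {
            "message_id": message_id,  # groups the chunks of one logical message
            "seq": seq,                # 0-based position, used for reassembly
            "total": total,            # lets the receiver know when it has all chunks
            "body": piece,
        }

def reassemble(chunks):
    """Restore the original payload from chunks, regardless of arrival order."""
    ordered = sorted(chunks, key=lambda c: c["seq"])
    if len(ordered) != ordered[0]["total"]:
        raise ValueError("missing chunks")
    return b"".join(c["body"] for c in ordered)
```

For example, a 600 KB sensor reading becomes four queue-sized chunks, and `reassemble` on the consumer side rebuilds the original bytes before the dump into the database.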

You have no real need for a queue that I can immediately see.

You should take a look at patterns leveraging:

  1. Stream Analytics: https://azure.microsoft.com/en-us/services/stream-analytics/
  2. Azure Data Factory: https://azure.microsoft.com/en-us/services/data-factory/

Your typical ingestion will be: get your data up into the cloud into super-cheap storage as easily as possible, then deal with analytics later using clusters you can stand up and tear down on demand. That cheap storage is typically Blob Storage, and that analytics cluster is usually some form of Hadoop. Using Data Factory allows you to pipe your data around as you figure out what you are going to use specific parts of it for.
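A small sketch of that "land it cheap, analyze later" layout: name the blobs with a wasb:// URI partitioned by device and UTC date, so a Hadoop-style cluster mounted on the same container can later prune by path. The container name, account name, and path scheme here are assumptions for illustration, not a required convention.

```python
from datetime import datetime, timezone

def raw_blob_path(device_id: str, ts: datetime,
                  container: str = "telemetry",
                  account: str = "mystorageacct") -> str:
    """Build a wasb:// URI partitioned by device and UTC date, so analytics
    jobs can read only the partitions (paths) they need."""
    return (
        f"wasb://{container}@{account}.blob.core.windows.net/"
        f"raw/{device_id}/{ts:%Y/%m/%d}/{ts:%H%M%S}.json"
    )

# Example: a reading from device-42 lands under raw/device-42/2015/07/09/...
path = raw_blob_path(
    "device-42", datetime(2015, 7, 9, 12, 30, 5, tzinfo=timezone.utc)
)
```

The actual upload would go through whatever Blob Storage client your devices or REST service can use; the point is only that a consistent, date-partitioned layout makes the later analytics step cheap.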

An example of using HBase for ingestion, with cheap blob storage as the underlayment and Azure Machine Learning as part of the analytics solution: http://indiedevspot.com/2015/07/09/powering-azureml-with-hadoop-hbase/

