
How to perform processing of Azure Media Service Live Stream video as it's being streamed?

I am creating an application which takes video from a camera hosted on the web, runs it through a computer vision algorithm to detect humans (written in C# using EmguCV's OpenCV wrapper), and streams the processed video to an ASP.NET client.
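For reference, the per-frame step looks roughly like the following (a simplified sketch that uses OpenCV's built-in HOG people detector in place of my actual algorithm; names are illustrative):

```csharp
using Emgu.CV;
using Emgu.CV.Structure;

// Detect people in a frame and draw boxes around them.
// HOGDescriptor here is a stand-in for the real detection algorithm.
static Mat AnnotateFrame(Mat frame, HOGDescriptor hog)
{
    MCvObjectDetection[] people = hog.DetectMultiScale(frame);
    foreach (var p in people)
        CvInvoke.Rectangle(frame, p.Rect, new MCvScalar(0, 255, 0), 2);
    return frame;
}

// Usage:
// var hog = new HOGDescriptor();
// hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());
// var annotated = AnnotateFrame(frame, hog);
```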

The process I believed would work was to have Azure Media Services create a live stream channel for the video, and somewhere in that process inject my code to process the video. The algorithm uses a SQL database for much of its decision making, so I thought to put it in a WebJob and have it process the video as it is put into storage. I would much rather process it somewhere in the Azure Media Services pipeline instead of using a WebJob.
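For the WebJob route, I was picturing something like the following (a rough sketch using the WebJobs SDK's blob bindings; the container names are placeholders and the actual detection step is omitted):

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Fires when a new blob lands in the (hypothetical) "incoming-video" container
    // and writes the annotated result to "processed-video".
    public static void ProcessVideo(
        [BlobTrigger("incoming-video/{name}")] Stream input,
        [Blob("processed-video/{name}", FileAccess.Write)] Stream output,
        string name,
        TextWriter log)
    {
        log.WriteLine("Processing " + name);
        // Run the EmguCV detection over the file here and write the
        // annotated video to the output blob (detection omitted).
        input.CopyTo(output);
    }
}
```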

My question is: is there a way to process the video as it comes in, so that what is seen in storage is the processed video with boxes around the people (boxes placed by my algorithm, which takes a frame as input and outputs a frame)? If so, where can I put my logic to do this? In the encoder setup?

Also, if you have another way of doing it, please let me know! I am open to ideas! I plan on scaling this app to use more than one camera as input, and the client should be able to switch between feeds. This is off topic from my question but is a consideration. I know it is possible to have a WebJob take the video out of storage, process it, and put it back, but then the app loses the "live" aspect.

Technology stack:
- Azure SQL DB (created)
- Azure Website (created)
- Azure Media Services and Storage (created)
- Possibly an Azure WebJob to handle the algorithm?

Thank you so much in advance for any help!

As of now, Azure Media Services does not allow plugging user-defined code into the processing pipeline. You can select an existing processor or use the third-party encoders currently offered through the Azure Marketplace.
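For example, with the (legacy) v2 .NET SDK you can enumerate the processors that are available to you; this sketch assumes that SDK and uses placeholder credentials:

```csharp
using System;
using System.Linq;
using Microsoft.WindowsAzure.MediaServices.Client;

class ListProcessors
{
    static void Main()
    {
        // Account name/key are placeholders for your own AMS account.
        var credentials = new MediaServicesCredentials("<account-name>", "<account-key>");
        var context = new CloudMediaContext(credentials);

        // The choice is limited to these built-in processors (plus Marketplace encoders);
        // user code cannot be injected into the pipeline itself.
        foreach (var processor in context.MediaProcessors.ToList())
            Console.WriteLine("{0} (v{1})", processor.Name, processor.Version);
    }
}
```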

For now (based on the requirements you have), I think you need a proxy VM which does the face recognition on the incoming stream and redirects the processed stream to an Azure Media Services live channel. An NGINX web server + ffmpeg + OpenCV could be a good solution to look into.
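A rough, untested sketch of that proxy idea in C#/EmguCV (the stream URLs, resolution, and frame rate are placeholders; in practice ffmpeg or NGINX's RTMP module would handle the ingest/egress details):

```csharp
using System.Diagnostics;
using Emgu.CV;
using Emgu.CV.Structure;

// Pull the source stream, annotate each frame, and pipe raw frames into
// ffmpeg, which re-encodes and publishes to the AMS live channel ingest URL.
var capture = new VideoCapture("rtmp://source-camera/live/stream");
var hog = new HOGDescriptor();
hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());

var ffmpeg = Process.Start(new ProcessStartInfo
{
    FileName = "ffmpeg",
    Arguments = "-f rawvideo -pix_fmt bgr24 -s 1280x720 -r 25 -i - " +
                "-c:v libx264 -preset veryfast -f flv rtmp://<channel-ingest-url>/live",
    UseShellExecute = false,
    RedirectStandardInput = true
});

var frame = new Mat();
while (capture.Read(frame) && !frame.IsEmpty)
{
    // Draw boxes around detected people before re-streaming.
    foreach (var p in hog.DetectMultiScale(frame))
        CvInvoke.Rectangle(frame, p.Rect, new MCvScalar(0, 255, 0), 2);

    byte[] raw = frame.ToImage<Bgr, byte>().Bytes; // raw BGR24 bytes for ffmpeg's stdin
    ffmpeg.StandardInput.BaseStream.Write(raw, 0, raw.Length);
}
ffmpeg.StandardInput.Close();
```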
