
C++ OpenCV webcam stream to HTML

I am currently developing a project for my studies where I have to fetch a webcam stream, detect some objects, and overlay additional information on that stream. This is all done on the server side.

Now I also have to provide the modified image of the stream to the clients. The clients just open an HTML file with the following content:

<html>
    <head>
        <title></title>
    </head>
    <body>
        <h1>It works!</h1>
        <video width="320" height="240" src="http://127.0.0.1:4711/videostream" type="video/quicktime" autoplay controls>
            Your browser does not support the video tag.
        </video>
    </body>
</html>

This results in an HTTP request to the server for /videostream. To handle this request on the server side I will use Boost 1.56.

Currently each frame of my webcam stream is of type IplImage. Do I have to convert the IplImage into a video MIME-type-specific format?
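For context, the answer below ends up encoding each frame to an in-memory JPEG rather than a video-container format. If the frames start out as IplImage, they can be wrapped as a cv::Mat first; a minimal sketch, assuming OpenCV 2.4 and a hypothetical helper name encodeFrameAsJpeg:

#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical helper: wrap a legacy IplImage frame and encode it as an
// in-memory JPEG (image/jpeg), which is the format used in the answer below.
std::vector<uchar> encodeFrameAsJpeg(IplImage* iplFrame) {
    // cv::cvarrToMat creates a cv::Mat header over the IplImage data;
    // pass copyData=true if the Mat must outlive the IplImage.
    cv::Mat mat = cv::cvarrToMat(iplFrame);

    std::vector<uchar> jpegBytes;
    cv::imencode(".jpg", mat, jpegBytes);
    return jpegBytes;
}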

I have tried to figure out myself how the whole thing works, but I couldn't get it. I used Wireshark to analyze the communication, but it doesn't make sense to me. For testing purposes I uploaded a video to my webspace and opened the above file locally, with the src of the video pointing to my webserver. First there is the TCP handshake, followed by this message:

HTTP    765 GET /MOV_4198.MOV HTTP/1.1 

This was followed by the following message (it contains Connection: Keep-Alive in the HTTP part):

HTTP    279 HTTP/1.1 304 Not Modified 

Afterwards only TCP ACKs and SYNs follow, but no data (see the linked Wireshark screenshot).

Where and how is the actual video data sent? What have I missed here?

It would be great if you could give me some information about the connection between the browser (video tag) and the C++ socket connection.

Thank you, Stefan

I want to share the experience I gained - maybe it will help others too. To get the stream from the webcam I used OpenCV 2.4.9, and as the protocol I used the MJPEG streaming protocol (see also MJPEG over HTTP) - thanks to @berak, who mentioned MJPEG in a comment under my question.

The following code just gives an overview - I do not go into threading details. Since this is a student project and we are using GitHub, you can find the whole source code on GitHub in the Swank Rats project. I want to mention that I am not a C++, OpenCV, or Boost guru; this project is the first time I have used all three of them.

Get the stream from your webcam

Do something like this (for the full code with threading, search for WebcamService in the repo); a sketch of how the last frame can be shared between threads follows the loop below:

#include <opencv2/opencv.hpp>

cv::VideoCapture capture(0); // open the default webcam (an empty "capture()" would declare a function instead)
cv::Mat frame;
cv::Mat lastImage;

while (true) {
    if (!capture.isOpened()) {
        break; // do some logging here or something else - webcam not available
    }

    // Create image frames from capture
    capture >> frame;

    if (!frame.empty()) {
        // do something with your image (e.g. provide it to the streaming server)
        lastImage = frame.clone();
    }
}
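Since the threading details are skipped above, here is a hedged sketch of how the last frame could be shared between the capture thread and the HTTP thread - the real WebcamService in the repo may look different, this only illustrates the idea, assuming C++11:

#include <opencv2/opencv.hpp>
#include <mutex>

class LastImageHolder {
public:
    // called from the capture loop above instead of the bare assignment
    void Set(const cv::Mat& frame) {
        std::lock_guard<std::mutex> lock(mutex_);
        lastImage_ = frame.clone();
    }

    // called from the HTTP handler (compare webcamService->GetLastImage() below)
    cv::Mat GetLastImage() {
        std::lock_guard<std::mutex> lock(mutex_);
        return lastImage_.clone();
    }

private:
    std::mutex mutex_;
    cv::Mat lastImage_;
};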

Provide your image via HTTP

I won't go into detail about how to create an HTTP server with C++. There is a nice HTTP server example provided by Boost for C++11. I copied this code and adapted it to my needs. You can find the source code of my implementation in the repo mentioned above; the code is currently located at infrastructure / networking / videostreaming.
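To give an idea of what such a server roughly does, here is a minimal, synchronous sketch, assuming Boost.Asio from Boost 1.56 and port 4711 as in the HTML above - the actual implementation in the repo is based on the asynchronous Boost example and differs in structure:

#include <boost/asio.hpp>

int main() {
    using boost::asio::ip::tcp;

    boost::asio::io_service io_service;
    // listen on the port used in the HTML snippets (4711)
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 4711));

    for (;;) {
        tcp::socket socket(io_service);
        acceptor.accept(socket);

        // read the request line and headers, e.g. "GET /videostream HTTP/1.1"
        // (request parsing is omitted in this sketch)
        boost::asio::streambuf request;
        boost::asio::read_until(socket, request, "\r\n\r\n");

        // ... write the MJPEG response on this socket (see below) ...
    }
}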

There is no need to use FFmpeg, GStreamer, or something similar. You can create an in-memory JPEG using OpenCV like this (see the code of StreamResponseHandler):

cv::Mat image = webcamService->GetLastImage();
// encode mat to jpg and copy it to content
std::vector<uchar> buf;
cv::imencode(".jpg", image, buf, std::vector<int>());

std::string content(buf.begin(), buf.end()); //this must be sent to the client

Thanks to @codeDr for his post here.

The content variable represents the image as bytes, which will be sent to the client. You have to follow the MJPEG protocol.
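As a rough illustration of that protocol, the response starts with a multipart/x-mixed-replace header and then repeats one part per JPEG frame. A hedged sketch, assuming a connected Boost.Asio socket and the content string from above (the boundary name "frame" and the function itself are illustrative, not the repo's actual code):

#include <boost/asio.hpp>
#include <sstream>
#include <string>

void sendMjpegFrame(boost::asio::ip::tcp::socket& socket,
                    const std::string& content, bool firstFrame) {
    std::ostringstream response;

    if (firstFrame) {
        // sent once per connection: announce the multipart stream
        response << "HTTP/1.1 200 OK\r\n"
                 << "Content-Type: multipart/x-mixed-replace; boundary=frame\r\n"
                 << "\r\n";
    }

    // sent for every frame: boundary, part headers, then the raw JPEG bytes
    response << "--frame\r\n"
             << "Content-Type: image/jpeg\r\n"
             << "Content-Length: " << content.size() << "\r\n"
             << "\r\n"
             << content << "\r\n";

    const std::string data = response.str();
    boost::asio::write(socket, boost::asio::buffer(data));
}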

Connect to the server using HTML

Something like this is enough (as mentioned here):

<html>  
    <body>  
        <h1> Test for simple Webcam Live streaming </h1>  
        <img src="http://127.0.0.1:4711/videostream">
    </body>  
</html> 

You have to change the IP, port, and so on to match your server connection.

I hope this helps.
