
Possible SDL_Net issue and Using openCV to display a YUV camera frame from memory

I am having issues getting an image from the camera on a Raspberry Pi, over a network, onto a PandaBoard (running Ubuntu 12.04) and displaying it correctly. The data I get from the camera is raw YUV at 1280x720 resolution.

I think my SDL calls are fine, but here is the send code. Feel free to point out anything that looks clearly wrong.

void Client::SendData(const void* buffer, int bufflen)
{
     /*
      Some code to check if connected to server and if socket is not null
     */

     if(SDLNet_TCP_Send(clientSocket, buffer, bufflen) < bufflen)
     {
         std::cerr << "SDLNet_TCP_Send: " << SDLNet_GetError() << std::endl;
         return;
     }
}

Now the receive code:

void Server::ReceiveDataFromClient()
{
    /*
        code to check if data is being sent
    */
    //1382400 is the size of the image in bytes, before it is sent. This data
    //is in bufflen in the send func and, to my knowledge, is correct.
    if(SDLNet_TCP_Recv(clientSocket, buffer, 1382400) <= 0)
    {
        std::cout << "Client disconnected" << std::endl;
        /*Code to shut down socket and socketset.*/
    }
    else //client is sending data
    {
        //buffer is an int* at the moment, I have tried it as a uint8_t* and a char*
        setUpOpenCVToDisplayChunk(buffer);
    }
}

So, I take buffer directly from Recv, which, as far as I know, should only return once it has received all the data from a single send. I therefore think that code is fine, but it is included here in case anyone can spot an issue, as I am struggling with this at the moment.
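
For reference, if SDLNet_TCP_Recv can in fact return fewer bytes than requested (TCP is a byte stream, so a single receive is not guaranteed to match up with a single send), a defensive pattern is to keep reading until the whole frame has arrived. This is only a sketch, reusing the 1382400-byte frame size from above; the function and parameter names are made up for illustration:

#include <iostream>
#include <cstdint>
#include <SDL/SDL_net.h>   // adjust the include path for your SDL_net installation

// Sketch only: accumulate reads until a full frame is buffered.
bool ReceiveFullFrame(TCPsocket clientSocket, uint8_t* buffer, int frameSize)
{
    int received = 0;
    while(received < frameSize)
    {
        // Each call may return anywhere from 1 byte up to the remaining count.
        int n = SDLNet_TCP_Recv(clientSocket, buffer + received, frameSize - received);
        if(n <= 0)
        {
            std::cout << "Client disconnected" << std::endl;
            return false;
        }
        received += n;
    }
    return true;   // buffer now holds one complete 1280x720 YUV frame
}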

Lastly, my OpenCV display code:

void Server::setUpOpenCVToDisplayChunk(int* data)
{
    //I have tried different bit depths also
    IplImage* yImageHeader = cvCreateImageHeader(cvSize(1280, 720), IPL_DEPTH_8U, 1);

    //code to check yImage header is created correctly
    cvSetData(yImageHeader, data, yImageHeader->widthStep);
    cvNamedWindow("win1", CV_WINDOW_AUTOSIZE);
    cvShowImage("win1", yImageHeader);
}

Sorry for all the "code here to do this" parts; I am typing the code out manually.

So, can anyone say what the issue could be in either of these parts? There is no error; I just get muddled-up images, which I can tell are images, just wrongly put together or incomplete.

If anyone needs more info or more code, just ask and I will put it up. Cheers.

Try converting the frames from YUV to RGB. http://en.wikipedia.org/wiki/YUV lists how YUV-formatted data is converted to RGB. You might also find readily available code to do that. Check the format of the YUV data output from your camera and use the correct transformation.
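
As a sketch of that suggestion: 1280 x 720 x 1.5 = 1382400 bytes matches a planar YUV 4:2:0 layout (for example I420, which the Raspberry Pi camera commonly produces), so one possible conversion using OpenCV's cvtColor could look like the code below. The assumed frame layout, function name, and parameter names are illustrative only; check your camera's actual output format and pick the matching conversion code (the I420 codes need an OpenCV build that provides them, roughly 2.4 and later).

#include <cstdint>
#include <opencv2/opencv.hpp>

// Sketch only: assumes the received frame is planar I420 (YUV 4:2:0),
// which matches the 1280*720*3/2 = 1382400 byte frame size.
void displayFrameAsBGR(uint8_t* data, int width, int height)
{
    // A single-channel Mat of height*3/2 rows wraps the Y plane followed
    // by the subsampled U and V planes, without copying the data.
    cv::Mat yuv(height + height / 2, width, CV_8UC1, data);

    cv::Mat bgr;
    cv::cvtColor(yuv, bgr, CV_YUV2BGR_I420); // use the code matching the real layout

    cv::imshow("win1", bgr);
    cv::waitKey(1); // let HighGUI refresh the window
}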
