
Changing a C/C++ OpenCV program to CUDA for a video stabilization program

I am writing a C++ video stabilization / anti-shaking program which:

- Gets points of interest on the reference frame (using FAST, SURF, Shi-Tomasi, or SIFT; I might try a few more)
- Calculates the Lucas-Kanade optical flow with calcOpticalFlowPyrLK
- Gets the homography matrix
- Corrects the shaky image using warpPerspective (see code below)

//Calculate the Lucas Kanade optical flow
calcOpticalFlowPyrLK(original, distorted, refFeatures, currFeatures, featuresFound, err);   

//Find the homography between the current frame's features and the reference frame's
if(homographyRansac){
    homography = findHomography(currFeatures, refFeatures, CV_RANSAC); /*CV_RANSAC: Random sample consensus (RANSAC) is an iterative method to
    estimate parameters of a mathematical model from a set of observed data which contains outliers */
}else{
    homography = findHomography(currFeatures, refFeatures, 0);
}


//We use warpPerspective once on the distorted image to get the resulting fixed image
if(multiChannel){
    //Splitting into channels
    vector <Mat> rgbChannels(channels), fixedChannels;
    split(distortedCopy, rgbChannels);
    recovered = Mat(reSized, CV_8UC3);
    //We apply the transformation to each channel
    for(int i = 0; i < channels; i ++){
        Mat tmp;
        warpPerspective(rgbChannels[i], tmp, homography, reSized);
        fixedChannels.push_back(tmp);
    }
    //Merge the result to obtain a 3 channel corrected image
    merge(fixedChannels, recovered);
}else{
    warpPerspective(distorted, recovered, homography, reSized);
}

If you have an alternative to my stabilization approach, feel free to mention it, but that's not this thread's topic.

Since all this takes a lot of time (around 300 ms per frame on my i5 computer, which adds up to a VERY long time for a 30-minute video), I am considering using CUDA to speed things up. I've installed it and got it working, but I'm not sure how to proceed next. I've run some tests, and I know the most time-consuming operations are computing the optical flow and correcting the frame, using calcOpticalFlowPyrLK and warpPerspective respectively. So ideally, at least at first, I would only use the CUDA versions of these two functions, leaving the rest unchanged.

Is this possible? Or do I need to re-write everything?

Thanks

Since OpenCV 3.0, a CUDA implementation of video stabilization has been available. It is recommended to use the existing implementation instead of writing your own, unless you are sure your version would be better or faster.

Here is a minimal example demonstrating how to use the OpenCV video stabilization module to stabilize a video.

#include <opencv2/highgui.hpp>
#include <opencv2/videostab.hpp>

using namespace cv::videostab;

int main()
{
    std::string videoFile = "shaky_video.mp4";

    MotionModel model = cv::videostab::MM_TRANSLATION; //Type of motion to compensate
    bool use_gpu = true; //Select CUDA version or "regular" version

    cv::Ptr<VideoFileSource> video = cv::makePtr<VideoFileSource>(videoFile,true);
    cv::Ptr<OnePassStabilizer> stabilizer = cv::makePtr<OnePassStabilizer>();

    cv::Ptr<MotionEstimatorBase> MotionEstimator = cv::makePtr<MotionEstimatorRansacL2>(model);

    cv::Ptr<ImageMotionEstimatorBase> ImageMotionEstimator;

    if (use_gpu)
        ImageMotionEstimator = cv::makePtr<KeypointBasedMotionEstimatorGpu>(MotionEstimator);
    else
        ImageMotionEstimator = cv::makePtr<KeypointBasedMotionEstimator>(MotionEstimator);

    stabilizer->setFrameSource(video);
    stabilizer->setMotionEstimator(ImageMotionEstimator);
    stabilizer->setLog(cv::makePtr<cv::videostab::NullLog>()); //Disable internal prints

    std::string windowTitle = "Stabilized Video";

    cv::namedWindow(windowTitle, cv::WINDOW_AUTOSIZE);

    while(true)
    {
        cv::Mat frame = stabilizer->nextFrame();

        if(frame.empty())   break;

        cv::imshow(windowTitle,frame);
        cv::waitKey(10);
    }

    return 0;
}
