Changing a C/C++ OpenCV video stabilization program to CUDA
I am working on a C++ video stabilization/anti-shake program that:
- gets interest points on a reference frame (using FAST, SURF, Shi-Tomasi, or SIFT; I might try a few more)
- computes the Lucas-Kanade optical flow with calcOpticalFlowPyrLK
- gets the homography matrix
- corrects the shaky image using warpPerspective (see the code below)
//Calculate the Lucas-Kanade optical flow
calcOpticalFlowPyrLK(original, distorted, refFeatures, currFeatures, featuresFound, err);

//Find the homography between the current frame's features and the reference frame's
if(homographyRansac){
    /*CV_RANSAC: Random sample consensus (RANSAC) is an iterative method to estimate
      parameters of a mathematical model from a set of observed data which contains outliers*/
    homography = findHomography(currFeatures, refFeatures, CV_RANSAC);
}else{
    homography = findHomography(currFeatures, refFeatures, 0);
}

//We use warpPerspective once on the distorted image to get the resulting fixed image
if(multiChannel){
    //Splitting into channels
    vector<Mat> rgbChannels(channels), fixedChannels;
    split(distortedCopy, rgbChannels);
    recovered = Mat(reSized, CV_8UC3);
    //We apply the transformation to each channel
    for(int i = 0; i < channels; i++){
        Mat tmp;
        warpPerspective(rgbChannels[i], tmp, homography, reSized);
        fixedChannels.push_back(tmp);
    }
    //Merge the result to obtain a 3-channel corrected image
    merge(fixedChannels, recovered);
}else{
    warpPerspective(distorted, recovered, homography, reSized);
}
If you have any alternative to my stabilization approach, feel free to mention it, but that is not the main topic here.
Since all of these operations take a lot of time (around 300 ms per frame on my i5 machine, which adds up to a very long time for a 30-minute video), I am considering using CUDA to speed things up. I have installed it and got it working, but I am not sure how to proceed from here. I have run some tests and I know the most time-consuming operations are getting the optical flow and the frame correction, using calcOpticalFlowPyrLK and warpPerspective respectively. So ideally, at least at first, I would only use the CUDA versions of those two functions and keep the rest unchanged.
Is this possible, or do I need to rewrite everything?
Thanks
Since OpenCV 3.0, a CUDA implementation of video stabilization is available. It is recommended to use the already available implementation instead of writing your own, unless you are sure yours would be better or faster.
Here is a minimal piece of code demonstrating how to use the OpenCV video stabilization module to stabilize a video.
#include <opencv2/highgui.hpp>
#include <opencv2/videostab.hpp>

using namespace cv::videostab;

int main()
{
    std::string videoFile = "shaky_video.mp4";

    MotionModel model = cv::videostab::MM_TRANSLATION; //Type of motion to compensate
    bool use_gpu = true;                               //Select CUDA version or "regular" version

    cv::Ptr<VideoFileSource> video = cv::makePtr<VideoFileSource>(videoFile, true);
    cv::Ptr<OnePassStabilizer> stabilizer = cv::makePtr<OnePassStabilizer>();

    cv::Ptr<MotionEstimatorBase> MotionEstimator = cv::makePtr<MotionEstimatorRansacL2>(model);
    cv::Ptr<ImageMotionEstimatorBase> ImageMotionEstimator;

    if (use_gpu)
        ImageMotionEstimator = cv::makePtr<KeypointBasedMotionEstimatorGpu>(MotionEstimator);
    else
        ImageMotionEstimator = cv::makePtr<KeypointBasedMotionEstimator>(MotionEstimator);

    stabilizer->setFrameSource(video);
    stabilizer->setMotionEstimator(ImageMotionEstimator);
    stabilizer->setLog(cv::makePtr<cv::videostab::NullLog>()); //Disable internal prints

    std::string windowTitle = "Stabilized Video";
    cv::namedWindow(windowTitle, cv::WINDOW_AUTOSIZE);

    while(true)
    {
        cv::Mat frame = stabilizer->nextFrame();
        if(frame.empty()) break;
        cv::imshow(windowTitle, frame);
        cv::waitKey(10);
    }

    return 0;
}