How to calibrate camera focal length, translation and rotation given four points?
How to find the focal length from camera matrix?
I have OpenCV code that computes the camera matrix and removes distortion from images.
Here is part of the code, in OpenCV and C++.
//default capture width and height
const int FRAME_WIDTH = 1288;
const int FRAME_HEIGHT = 964;
//max number of objects to be detected in frame
const int MAX_NUM_OBJECTS=50;
//minimum and maximum object area
const int MIN_OBJECT_AREA = 2*2;
const int MAX_OBJECT_AREA = FRAME_HEIGHT*FRAME_WIDTH/1.5;
Mat DistortedImg; //storage for a copy of the raw image
Mat UndistortedImg; //undistorted output
double cameraM[3][3] = {{1103.732864, 0.000000, 675.056365}, {0.000000, 1100.058630, 497.063376}, {0, 0, 1}}; //camera matrix to be edited
Mat CameraMatrix = Mat(3, 3, CV_64FC1, cameraM);
double distortionC[5] = {-0.346476, 0.142352, -0.000084, -0.001727, 0.000000}; //distortion coefficients to be edited
Mat DistortionCoef = Mat(1, 5, CV_64FC1, distortionC);
double rArray[3][3] = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
Mat RArray = Mat(3, 3, CV_64FC1, rArray); //originally CV_64F
double newCameraM[3][3] = {{963.436584, 0.000000, 680.157832}, {0.000000, 1021.688843, 498.825528}, {0, 0, 1}};
Mat NewCameraMatrix = Mat(3, 3, CV_64FC1, newCameraM);
Size UndistortedSize(1288,964);
Mat map1;
Mat map2;
string intToString(int number)
{
std::stringstream ss;
ss << number;
return ss.str();
}
void imageCb(const sensor_msgs::ImageConstPtr& msg) //callback function definition
{
cv_bridge::CvImagePtr cv_ptr;
try
{
cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8); //convert the ROS image to a CV image and store a copy of it in cv_ptr (a pointer)
}
catch (cv_bridge::Exception& e)
{
ROS_ERROR("cv_bridge exception: %s", e.what());
return;
}
/* image working procedure starts here inside the callback.
* The purpose of the image processing is to use the existing video to work out the
* coordinates of the detected object, using a color extraction technique.
*/
bool trackObjects = true;
bool useMorphOps = true;
Mat cameraFeed;
Mat HSV;
Mat threshold;
Mat ideal_image;
//x and y values for the location of the object
int x=0, y=0;
createTrackbars();
//store image to matrix
cv_ptr->image.copyTo(DistortedImg); //copy the image from the ardrone to DistortedImg for processing
initUndistortRectifyMap(CameraMatrix, DistortionCoef, RArray, NewCameraMatrix, UndistortedSize, CV_32FC1, map1, map2); //note: the maps depend only on the calibration, so this could be computed once instead of every frame
remap(DistortedImg, cameraFeed, map1, map2, INTER_LINEAR, BORDER_CONSTANT, Scalar(0,0,0));
cvtColor(cameraFeed,HSV,COLOR_BGR2HSV); //convert frame from BGR to HSV colorspace
//output the after-threshold matrix to Mat threshold
inRange(HSV,Scalar(iLowH_1, iLowS_1, iLowV_1),Scalar(iHighH_1, iHighS_1, iHighV_1),threshold);
//inRange(HSV,Scalar(0, 87, 24),Scalar(9, 255, 255),threshold); //red
morphOps(threshold);
GaussianBlur( threshold, ideal_image, Size(9, 9), 2, 2 );
trackFilteredObject1(x,y,ideal_image,cameraFeed);
namedWindow( "Image with deal1", 0 );
namedWindow( "Original Image", 0 );
imshow("Image with deal1",ideal_image);
imshow("Original Image", cameraFeed);
//delay 30ms so that screen can refresh.
//image will not appear without this waitKey() command
cv::waitKey(30);
}
I do not know how to use this code to find the focal length from the camera matrix. This code is supposed to compute the camera matrix, and from it I need to find the focal length. However, I am not sure whether this is the right way to obtain the camera matrix and then the focal length. The camera matrix is a 3x3 matrix, but how are these parameters computed?
Any help?
First, a few words about the camera matrix:
The camera matrix has the following form:
f_x s c_x
0 f_y c_y
0 0 1
where:
f_x is the camera focal length along the x axis, in pixels
f_y is the camera focal length along the y axis, in pixels
s is a skew parameter (usually not used)
c_x is the optical center in x
c_y is the optical center in y
Usually f_x and f_y are the same, but they can differ. You can get more information about it in this link.
Now let's get back to your code.
In your code the camera matrix is hard-coded, not computed! Specifically, here:
double cameraM[3][3] = {{1103.732864, 0.000000, 675.056365}, {0.000000, 1100.058630, 497.063376}, {0, 0, 1}}; //camera matrix to be edited
Mat CameraMatrix = Mat(3, 3, CV_64FC1, cameraM);
Nowhere in the code is there anything that computes it.
Calibrating a camera involves several steps:
Obtain images of a chessboard-like calibration pattern
Find the intersections of the squares in these images (findChessboardCorners)
Then use the calibrateCamera function to obtain the matrix and other information
More information about it is available here.
Once you have the camera matrix, you can read the focal length from it (in pixels). If you want it in millimetres, you also need the physical size of the camera sensor (you will have to ask the manufacturer or find it on the Internet).