
How to find the focal length from camera matrix?

I have OpenCV code for calculating the camera matrix and correcting distortion in an image.

Here is the relevant part of the code, written with OpenCV in C++.

//default capture width and height
const int FRAME_WIDTH = 1288;
const int FRAME_HEIGHT = 964;
//max number of objects to be detected in frame
const int MAX_NUM_OBJECTS=50;
//minimum and maximum object area
const int MIN_OBJECT_AREA = 2*2;
const int MAX_OBJECT_AREA = FRAME_HEIGHT*FRAME_WIDTH/1.5;

Mat DistortedImg;                                           //storage for a copy of the raw image
Mat UndistortedImg;                                         //

double cameraM[3][3] = {{1103.732864, 0.000000, 675.056365}, {0.000000, 1100.058630, 497.063376}, {0, 0, 1}}; //camera matrix to be edited
Mat CameraMatrix = Mat(3, 3, CV_64FC1, cameraM);

double distortionC[5] = {-0.346476, 0.142352, -0.000084, -0.001727, 0.000000};              //distortion coefficients to be edited
Mat DistortionCoef = Mat(1, 5, CV_64FC1, distortionC);                          

double rArray[3][3] = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
Mat RArray = Mat(3, 3, CV_64FC1, rArray);                   //originally CV_64F

double newCameraM[3][3] = {{963.436584, 0.000000, 680.157832}, {0.000000, 1021.688843, 498.825528}, {0, 0, 1}};
Mat NewCameraMatrix = Mat(3, 3, CV_64FC1, newCameraM);
Size UndistortedSize(1288,964);

Mat map1;
Mat map2;       

string intToString(int number)
{
    std::stringstream ss;
    ss << number;
    return ss.str();
}

void imageCb(const sensor_msgs::ImageConstPtr& msg)                               //callback function definition
{
   cv_bridge::CvImagePtr cv_ptr;                                                       
   try
   {
        cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);              //convert the ROS image to a CV image and store a copy of it in cv_ptr (a pointer)
   }
   catch (cv_bridge::Exception& e)
   {
      ROS_ERROR("cv_bridge exception: %s", e.what());
      return;
   }

    /*  Image processing starts from here inside the callback function.
     *  The purpose is to work out the coordinates of the detected object
     *  from the incoming video, using a color extraction technique.
     */

    bool trackObjects = true;
    bool useMorphOps = true;

    Mat cameraFeed; 
    Mat HSV;    
    Mat threshold;
    Mat ideal_image;

    //x and y values for the location of the object
    int x=0, y=0;
    createTrackbars();

    //store image to matrix
    cv_ptr->image.copyTo(DistortedImg);                                         //copy the image from the drone to DistortedImg for processing
    initUndistortRectifyMap(CameraMatrix, DistortionCoef, RArray, NewCameraMatrix, UndistortedSize, CV_32FC1, map1, map2);
    remap(DistortedImg, cameraFeed, map1, map2, INTER_LINEAR, BORDER_CONSTANT, Scalar(0,0,0));      

    cvtColor(cameraFeed,HSV,COLOR_BGR2HSV);                                  //convert frame from BGR to HSV colorspace

    //output the after-threshold matrix to Mat threshold
    inRange(HSV,Scalar(iLowH_1, iLowS_1, iLowV_1),Scalar(iHighH_1, iHighS_1, iHighV_1),threshold);      
    //inRange(HSV,Scalar(0, 87, 24),Scalar(9, 255, 255),threshold);  //red

    morphOps(threshold);
    GaussianBlur( threshold, ideal_image, Size(9, 9), 2, 2 );

    trackFilteredObject1(x,y,ideal_image,cameraFeed);


    namedWindow( "Image with deal1", 0 );
    namedWindow( "Original Image", 0 );

    imshow("Image with deal1",ideal_image);
    imshow("Original Image", cameraFeed);

    //delay 30ms so that screen can refresh.
    //image will not appear without this waitKey() command
    cv::waitKey(30);
}

I'm not sure how to use this code to find the focal length from the camera matrix. As I understand it, the code should calculate the camera matrix, and from that I need to find the focal length. But somehow I'm not sure this is the right way to get the camera matrix and then the focal length. The camera matrix is a 3x3 matrix, but how are its parameters calculated?

Any help?

First a little bit about the camera matrix:

The camera matrix is of the following form:

f_x  s    c_x
0    f_y  c_y
0    0    1

where f_x is the camera focal length in the x axis in pixels

f_y is the camera focal length in the y axis in pixels

s is a skew parameter (normally not used)

c_x is the optical center in x

c_y is the optical center in y

Normally f_x and f_y are identical, but it is possible for them to differ. In this link you can find even more information about it.
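
To make this concrete, here is a minimal sketch (reusing the hardcoded CameraMatrix values from your code) of how to read those entries out of a cv::Mat. The numbers below come straight from your cameraM array:

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // Same hardcoded intrinsics as in the question
    double cameraM[3][3] = {{1103.732864, 0.000000, 675.056365},
                            {0.000000, 1100.058630, 497.063376},
                            {0, 0, 1}};
    cv::Mat CameraMatrix(3, 3, CV_64FC1, cameraM);

    double fx = CameraMatrix.at<double>(0, 0);  // focal length along x, in pixels
    double fy = CameraMatrix.at<double>(1, 1);  // focal length along y, in pixels
    double cx = CameraMatrix.at<double>(0, 2);  // optical center x
    double cy = CameraMatrix.at<double>(1, 2);  // optical center y

    std::cout << "fx = " << fx << " px, fy = " << fy << " px, "
              << "cx = " << cx << ", cy = " << cy << std::endl;
    return 0;
}

With your values this prints fx = 1103.73 px and fy = 1100.06 px.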

Now let's go back to your code.

In your code the camera matrix is hardcoded, not calculated, specifically here:

double cameraM[3][3] = {{1103.732864, 0.000000, 675.056365}, {0.000000, 1100.058630, 497.063376}, {0, 0, 1}}; //camera matrix to be edited
Mat CameraMatrix = Mat(3, 3, CV_64FC1, cameraM);

Nowhere in your code is the camera matrix actually calculated.

The calibration of a camera has several steps:

  1. Take several images of a chessboard-like pattern.

  2. Find the intersections of the squares in these images (findChessboardCorners).

  3. Then use the calibrateCamera function to get the camera matrix and other information.

More information about the calibration process can be found in the OpenCV documentation; a minimal sketch of these steps is shown below.
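
As an illustration, here is a minimal calibration sketch following the three steps above. The board size (9x6 inner corners), the 25 mm square size, and the calib0.jpg ... calib19.jpg file names are assumptions for the example; replace them with your own setup:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    // Assumptions: a 9x6 inner-corner chessboard with 25 mm squares,
    // and calibration images named calib0.jpg ... calib19.jpg (step 1).
    const cv::Size boardSize(9, 6);
    const float squareSize = 25.0f;                 // any consistent unit works

    // 3D coordinates of the board corners in the board's own coordinate frame
    std::vector<cv::Point3f> boardCorners;
    for (int i = 0; i < boardSize.height; ++i)
        for (int j = 0; j < boardSize.width; ++j)
            boardCorners.emplace_back(j * squareSize, i * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    for (int k = 0; k < 20; ++k)
    {
        cv::Mat img = cv::imread("calib" + std::to_string(k) + ".jpg", cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();

        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, boardSize, corners))        // step 2
        {
            // refine the corner locations to sub-pixel accuracy
            cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.001));
            imagePoints.push_back(corners);
            objectPoints.push_back(boardCorners);
        }
    }

    // step 3: compute the camera matrix and distortion coefficients
    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);

    std::cout << "RMS reprojection error: " << rms << "\n"
              << "camera matrix:\n" << cameraMatrix << "\n"
              << "fx = " << cameraMatrix.at<double>(0, 0) << " px" << std::endl;
    return 0;
}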

Once you have the camera matrix, you can read the focal length directly from it (in pixels). If you want it in millimeters, you will also need the physical size of the camera sensor (you have to ask the manufacturer or find it on the internet).
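
For example, the conversion from pixels to millimeters is just a scaling by the pixel pitch (sensor width divided by image width). The 4.8 mm sensor width below is only a placeholder; use the value from your camera's datasheet:

#include <iostream>

int main()
{
    const double fx_pixels = 1103.732864;   // focal length in pixels, entry (0,0) of the camera matrix
    const double sensor_width_mm = 4.8;     // physical sensor width in mm (placeholder value)
    const double image_width_px = 1288.0;   // image width in pixels used during calibration

    // focal length [mm] = focal length [px] * pixel size [mm/px]
    const double fx_mm = fx_pixels * sensor_width_mm / image_width_px;
    std::cout << "focal length = " << fx_mm << " mm" << std::endl;
    return 0;
}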
