
Translate and rotate a 3D point to the origin with PCL transformPointCloud

Given a rotation matrix and translation vector obtained from camera pose estimation of a corner of a ChArUco board, I want to transform that particular corner to the origin using a homogeneous transformation. If I understand correctly, the position of the camera is (0, 0, 0) in the camera's world, and the rotation matrix and translation vector are also expressed in the camera's coordinate system.

The translation vector tells you exactly where the point is in the camera's coordinate system, whereas the rotation matrix gives you its orientation with respect to the camera.

I then construct my homogeneous transformation matrix as:

    float transformation_matrix[] = {
        rmatrix.at<float>(0,0), rmatrix.at<float>(0,1), rmatrix.at<float>(0,2), tvec[0],
        rmatrix.at<float>(1,0), rmatrix.at<float>(1,1), rmatrix.at<float>(1,2), tvec[1],
        rmatrix.at<float>(2,0), rmatrix.at<float>(2,1), rmatrix.at<float>(2,2), tvec[2],
        0.f, 0.f, 0.f, 1.f
    };

where rmatrix is my rotation matrix and tvec is the translation vector.

The output of transformation_matrix is:

    transformation matrix        = 
     [-0.99144238, 0.12881841, 0.021162758, 0.18116128;
     0.045751289, 0.49469706, -0.86786038, -0.037676446;
     -0.12226554, -0.8594653, -0.49635723, 0.67637974;
     0, 0, 0, 1]

and the rotation matrix is:

    rodrigues matrix              = 
     [-0.99144238, 0.12881841, 0.021162758;
     0.045751289, 0.49469706, -0.86786038;
     -0.12226554, -0.8594653, -0.49635723]

translation vector:

    [0.181161, -0.0376764, 0.67638]

I then transform the cloud with pcl::transformPointCloud(*cloud, *transformed_cloud, translation_matrix);

However, the point does not align with the origin after the transformation. Does anyone have any insight into where I might have gone wrong?

    aruco::drawAxis(imageCopy, Camera_matrix, distortion, rvec, tvec, 0.1);
    cv::Rodrigues(rvec, rmatrix);
    pcl::PointCloud<pcl::PointXYZ>::Ptr transformed_cloud (new pcl::PointCloud<pcl::PointXYZ> ());
    float transformation_matrix[] = {
        rmatrix.at<float>(0,0), rmatrix.at<float>(0,1), rmatrix.at<float>(0,2), tvec[0],
        rmatrix.at<float>(1,0), rmatrix.at<float>(1,1), rmatrix.at<float>(1,2), tvec[1],
        rmatrix.at<float>(2,0), rmatrix.at<float>(2,1), rmatrix.at<float>(2,2), tvec[2],
        0.f, 0.f, 0.f, 1.f
    };
    cv::Mat translation_m = cv::Mat(4, 4, CV_32F, transformation_matrix);
    Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic> translation_matrix;
    cv::cv2eigen(translation_m, translation_matrix);
    pcl::transformPointCloud(*cloud, *transformed_cloud, translation_matrix);

UPDATE:

I compared the point cloud before and after the transformation and found that my homogeneous transformation was incorrect:

Here is the depth data taken from the side of the board. As you can see, the x, y, and z axes are displayed as red, green, and blue respectively; the white point cloud is before the transformation and the red point cloud is after. The cloud is flipped in the direction I want, but the origin (the intersection of the red, green, and blue axes) is sometimes not attached to one of the corners of the board, and the x axis is not perfectly aligned with the border of the board.

One thing I can confirm for sure is that when the camera is parallel to the board, the x axis is parallel to the transformed cloud as well (taken from an upright angle at the top of the board).

I suspect that I am missing one or two axes of rotation.

I verified that the rotation matrix and translation vector of the corner are correct by labeling another corner with the following code, using the R * P + T formula:

    aruco::drawAxis(imageCopy, Camera_matrix, distortion, rvec, tvec, 0.1);
    cv::Rodrigues(rvec, rmatrix);

    Vec3f corner1_0(0.4, 0, 0);
    Mat matcorner1_0 = FloatMatFromVec3f(corner1_0);
    auto mat_tvec = FloatMatFromVec3f(tvec);

    // R * P + T maps the corner from board coordinates into camera coordinates
    Mat rcorner1_0 = rmatrix * matcorner1_0 + mat_tvec;

    // rcorner1_0 is a 3x1 column vector, so index its rows
    float cornerdata[3] = {rcorner1_0.at<float>(0,0),
                           rcorner1_0.at<float>(1,0),
                           rcorner1_0.at<float>(2,0)};
    cv::Mat rcorner_10 = cv::Mat(3, 1, CV_32F, cornerdata);

    auto r1_0 = to_vec3f(rcorner_10);
    aruco::drawAxis(imageCopy, Camera_matrix, distortion, rvec, r1_0, 0.1);

The length of the board is 0.4 meters, so I set corner1_0 to (0.4, 0, 0). This is the image of the two marker axes I drew for indication purposes.

As you can see, the rotation matrix and translation vector are correct. Any help/hint/speculation is greatly appreciated :)

I solved the problem. The homogeneous matrix [R | t] from pose estimation transforms points from the board's (real-world) coordinate system into the camera's coordinate system. However, the point cloud already lives in the camera's coordinate system, so I need the transformation going the other way, from the camera's coordinate system to the board's.

You can achieve this by taking the inverse of the transformation matrix.

So adding translation_m = translation_m.inv(); between

    cv::Mat translation_m = cv::Mat(4,4,CV_32F, transformation_matrix);

and

    Eigen::Matrix<float,Eigen::Dynamic,Eigen::Dynamic> translation_matrix;
    cv::cv2eigen(translation_m, translation_matrix);

will give you the correct answer.
