
Extracting 3D coordinates given 2D image points, depth map and camera calibration matrices

I have a set of 2D image keypoints output from the OpenCV FAST corner detection function. Using an Asus Xtion I also have a time-synchronised depth map, with all camera calibration parameters known. Using this information I would like to extract a set of 3D coordinates (a point cloud) in OpenCV.

Can anyone give me any pointers regarding how to do so? Thanks in advance!

Nicolas Burrus has created a great tutorial for depth sensors like the Kinect.

http://nicolas.burrus.name/index.php/Research/KinectCalibration

I'll copy & paste the most important parts:

Mapping depth pixels with color pixels

The first step is to undistort the rgb and depth images using the estimated distortion coefficients. Then, using the depth camera intrinsics, each pixel (x_d, y_d) of the depth camera can be projected to metric 3D space using the following formula:

 P3D.x = (x_d - cx_d) * depth(x_d,y_d) / fx_d
 P3D.y = (y_d - cy_d) * depth(x_d,y_d) / fy_d
 P3D.z = depth(x_d,y_d)

with fx_d, fy_d, cx_d and cy_d the intrinsics of the depth camera.
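
For illustration, here is a minimal C++/OpenCV sketch of that back-projection, assuming the depth map has already been undistorted and holds metric depth in meters as CV_32F; the function name backproject is just a placeholder:

    #include <opencv2/core.hpp>

    // Back-project a single depth pixel (x_d, y_d) into metric 3D space
    // using the formula above. 'depth' is CV_32F, in meters, already undistorted.
    cv::Point3f backproject(const cv::Mat& depth, int x_d, int y_d,
                            float fx_d, float fy_d, float cx_d, float cy_d)
    {
        float z = depth.at<float>(y_d, x_d);        // depth(x_d, y_d)
        return cv::Point3f((x_d - cx_d) * z / fx_d,
                           (y_d - cy_d) * z / fy_d,
                           z);
    }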

If you are further interested in stereo mapping (values for the Kinect):

We can then reproject each 3D point onto the color image and get its color:

 P3D' = R.P3D + T
 P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
 P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb

with R and T the rotation and translation parameters estimated during the stereo calibration.
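
Again purely as a sketch, the reprojection onto the color image could look like this in C++/OpenCV, with R a 3x3 and T a 3x1 CV_64F matrix from the stereo calibration; the name projectToRgb is a placeholder:

    #include <opencv2/core.hpp>

    // Reproject a metric 3D point from the depth-camera frame onto the RGB image.
    cv::Point2f projectToRgb(const cv::Point3f& p3d,
                             const cv::Mat& R, const cv::Mat& T,   // CV_64F, 3x3 and 3x1
                             double fx_rgb, double fy_rgb, double cx_rgb, double cy_rgb)
    {
        cv::Mat p = (cv::Mat_<double>(3, 1) << p3d.x, p3d.y, p3d.z);
        cv::Mat q = R * p + T;                                     // P3D' = R.P3D + T
        double x = q.at<double>(0), y = q.at<double>(1), z = q.at<double>(2);
        return cv::Point2f(static_cast<float>(x * fx_rgb / z + cx_rgb),
                           static_cast<float>(y * fy_rgb / z + cy_rgb));
    }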

The parameters I could estimate for my Kinect are:

Color

fx_rgb 5.2921508098293293e+02
fy_rgb 5.2556393630057437e+02 
cx_rgb 3.2894272028759258e+02 
cy_rgb 2.6748068171871557e+02 
k1_rgb 2.6451622333009589e-01 
k2_rgb -8.3990749424620825e-01 
p1_rgb -1.9922302173693159e-03 
p2_rgb 1.4371995932897616e-03 
k3_rgb 9.1192465078713847e-01

Depth

fx_d 5.9421434211923247e+02
fy_d 5.9104053696870778e+02 
cx_d 3.3930780975300314e+02 
cy_d 2.4273913761751615e+02 
k1_d -2.6386489753128833e-01 
k2_d 9.9966832163729757e-01 
p1_d -7.6275862143610667e-04 
p2_d 5.0350940090814270e-03 
k3_d -1.3053628089976321e+00

Relative transform between the sensors (in meters)

 R [  9.9984628826577793e-01,  1.2635359098409581e-03, -1.7487233004436643e-02,
     -1.4779096108364480e-03,  9.9992385683542895e-01, -1.2251380107679535e-02,
      1.7470421412464927e-02,  1.2275341476520762e-02,  9.9977202419716948e-01 ]

 T [  1.9985242312092553e-02, -7.4423738761617583e-04, -1.0916736334336222e-02 ]
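
To connect this back to the original question (FAST keypoints plus an Xtion depth map), a point cloud could be assembled roughly as follows. This assumes the keypoints lie in the depth camera's image and that the depth map is 16-bit in millimeters (the usual OpenNI convention); keypointsToCloud is a made-up helper name and the 1/1000 scale may need adjusting for your driver:

    #include <cstdint>
    #include <vector>
    #include <opencv2/core.hpp>

    // Turn FAST keypoints plus a 16-bit depth map (millimeters) into a point cloud.
    std::vector<cv::Point3f> keypointsToCloud(const std::vector<cv::KeyPoint>& keypoints,
                                              const cv::Mat& depthMm,   // CV_16U
                                              float fx_d, float fy_d,
                                              float cx_d, float cy_d)
    {
        std::vector<cv::Point3f> cloud;
        for (const cv::KeyPoint& kp : keypoints)
        {
            int x = cvRound(kp.pt.x);
            int y = cvRound(kp.pt.y);
            float z = depthMm.at<uint16_t>(y, x) / 1000.0f;   // mm -> meters
            if (z <= 0.0f) continue;                          // skip missing depth
            cloud.emplace_back((x - cx_d) * z / fx_d,
                               (y - cy_d) * z / fy_d,
                               z);
        }
        return cloud;
    }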
