
Projection of a set of 3D points into a virtual image plane in OpenCV C++

Does anyone know how to project a set of 3D points into a virtual image plane in OpenCV C++?
Thank you

First, you need to define the transformation matrix (rotation, translation, etc.) that maps 3D space onto the 2D virtual image plane, and then multiply the 3D point coordinates (x, y, z) by the matrix to get the 2D coordinates in the image.
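To make this concrete, below is a minimal sketch using OpenCV's cv::projectPoints, which performs exactly this rotate-translate-and-divide-by-Z pipeline. The intrinsics, pose, and sample points are placeholder values, not anything from the question:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

int main()
{
    // A few 3D points in front of the virtual camera (placeholder data).
    std::vector<cv::Point3f> objectPoints = {
        {0.0f, 0.0f, 2.0f}, {0.1f, -0.1f, 2.5f}, {-0.2f, 0.3f, 3.0f}
    };

    // Assumed intrinsics of the virtual camera: focal length 525 px and
    // principal point at the center of a 640x480 image.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 525, 0, 320,
                                             0, 525, 240,
                                             0,   0,   1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F); // no lens distortion

    // Pose of the virtual camera: identity rotation (Rodrigues vector) and
    // zero translation; replace these with your own rotation/translation.
    cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);
    cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);

    // Rotate, translate, and project in one call.
    std::vector<cv::Point2f> imagePoints;
    cv::projectPoints(objectPoints, rvec, tvec, K, distCoeffs, imagePoints);

    for (const auto& p : imagePoints)
        std::cout << p << std::endl;
    return 0;
}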

Registration (OpenNI 2) or the alternative viewpoint capability (OpenNI 1.5) does indeed help to align depth with RGB using a single line of code. The price you pay is that you cannot really restore the exact X, Y point locations in 3D space, since the row and column are moved after alignment.
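For reference, here is a minimal sketch of what that single line looks like in OpenNI 2 (with the OpenNI 1.5 alternative-viewpoint equivalent in a comment); it assumes a physically connected device and omits stream setup and error handling:

#include <OpenNI.h>

int main()
{
    openni::OpenNI::initialize();

    openni::Device device;
    if (device.open(openni::ANY_DEVICE) != openni::STATUS_OK)
        return 1;

    // The single line: map the depth stream into the RGB camera's viewpoint.
    if (device.isImageRegistrationModeSupported(
            openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR))
        device.setImageRegistrationMode(openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR);

    // ... create depth/color VideoStreams and read frames as usual ...

    // OpenNI 1.5 equivalent, given xn::DepthGenerator depthGen and
    // xn::ImageGenerator imageGen:
    //     depthGen.GetAlternativeViewPointCap().SetViewPoint(imageGen);

    device.close();
    openni::OpenNI::shutdown();
    return 0;
}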

Sometimes you need not only Z but also X and Y, and you want them to be exact; plus you want the alignment of depth and RGB. Then you have to align RGB to depth. Note that this alignment is not supported by Kinect/OpenNI. The price you pay for this: there are no RGB values at the locations where depth is undefined.

If one knows the extrinsic parameters, that is, the rotation and translation of the depth camera relative to the color camera, then alignment is just a matter of making an alternative viewpoint: restore 3D from depth, and then look at your point cloud from the point of view of the color camera, that is, apply the inverse rotation and translation. For example, moving the camera to the right is like moving the world (points) to the left. Reproject 3D into 2D and interpolate if needed. This is really easy and is just the inverse of 3D reconstruction; below, Cx is close to w/2 and Cy to h/2:

col =  focal*X/Z + Cx
row = -focal*Y/Z + Cy   // this is because row in the image increases downward
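Putting the last two paragraphs together, here is a sketch that re-views a depth-camera point cloud from the color camera and splats it into a depth map using the formula above, keeping the nearest point per pixel (a simple z-buffer). The extrinsics convention, R and t as the pose of the depth camera expressed in the color camera's frame, is an assumption; check it against how your calibration defines them:

#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

cv::Mat cloudToDepthMap(const std::vector<cv::Point3d>& cloud, // depth-camera frame
                        const cv::Matx33d& R, const cv::Vec3d& t,
                        int w, int h, double focal)
{
    const double Cx = w / 2.0, Cy = h / 2.0;    // principal point near image center
    cv::Mat depth(h, w, CV_64F, cv::Scalar(0)); // 0 marks "no measurement"

    for (const auto& p : cloud)
    {
        // Moving the camera right is moving the world left: inverse rigid
        // transform R^T * (p - t) expresses p in color-camera coordinates.
        cv::Vec3d q = R.t() * (cv::Vec3d(p.x, p.y, p.z) - t);
        if (q[2] <= 0) continue; // behind the color camera

        int col = static_cast<int>(std::lround( focal * q[0] / q[2] + Cx));
        int row = static_cast<int>(std::lround(-focal * q[1] / q[2] + Cy)); // rows grow downward
        if (col < 0 || col >= w || row < 0 || row >= h) continue;

        double& d = depth.at<double>(row, col);
        if (d == 0.0 || q[2] < d) d = q[2]; // z-buffer: keep the closest point
    }
    // Holes left by sampling can be filled by interpolation, or avoided with
    // the ray-tracing approach described next.
    return depth;
}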

A proper but more expensive way to get a nice depth map after point-cloud rotation is to trace a ray from each pixel until it intersects the point cloud or comes sufficiently close to one of its points. This way you will have fewer holes in your depth map due to sampling artifacts. A brute-force sketch of this idea follows.
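This is only an illustration of the principle: it is O(width x height x N), and the radius threshold for "sufficiently close" is an assumed parameter; in practice you would use an acceleration structure such as a k-d tree:

#include <opencv2/core.hpp>
#include <limits>
#include <vector>

cv::Mat rayTraceDepth(const std::vector<cv::Point3d>& cloud, // color-camera frame
                      int w, int h, double focal, double radius)
{
    const double Cx = w / 2.0, Cy = h / 2.0;
    cv::Mat depth(h, w, CV_64F, cv::Scalar(0)); // 0 marks "no hit"

    for (int row = 0; row < h; ++row)
        for (int col = 0; col < w; ++col)
        {
            // Viewing ray through this pixel, consistent with
            // col = focal*X/Z + Cx and row = -focal*Y/Z + Cy.
            cv::Vec3d dir((col - Cx) / focal, (Cy - row) / focal, 1.0);
            dir *= 1.0 / cv::norm(dir);

            double best = std::numeric_limits<double>::max();
            for (const auto& p : cloud)
            {
                cv::Vec3d v(p.x, p.y, p.z);
                double along = v.dot(dir);              // distance along the ray
                if (along <= 0.0 || along >= best) continue;
                double off = cv::norm(v - along * dir); // distance from the ray
                if (off < radius)                       // "sufficiently close" hit
                {
                    best = along;
                    depth.at<double>(row, col) = p.z;
                }
            }
        }
    return depth;
}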

