
Transforming 2D pixel coordinates to 3D Azure Kinect

I am working on detecting a rectangle using the depth camera. At this point I am struggling to produce accurate 3D coordinates for a selected pixel with the NFOV depth camera. I tested this by selecting two points on a test board, transforming them from depth-camera 2D to color-camera 3D with the Kinect SDK C# wrapper's `TransformTo3D` function, calculating the distance between the resulting coordinates, and comparing it to the measured real-world distance between the selected points (918 mm). At a range of 2.7 m I get about 2 cm of error at the image center, while in the corners the error reaches up to 6 cm.
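The accuracy check described above boils down to comparing the Euclidean distance between the two returned 3D points against the tape measurement. A minimal Python sketch, using hypothetical coordinates in place of real SDK output (the actual values would come from `TransformTo3D`):

```python
import math

# Hypothetical 3D coordinates (millimetres) for the two selected pixels.
# In the real test these come from the Kinect SDK's TransformTo3D call.
point_a = (-412.0, 35.0, 2701.0)
point_b = (503.0, 41.0, 2688.0)

measured_mm = 918.0  # tape-measured distance between the real-world points

# Euclidean distance between the two transformed points
computed_mm = math.dist(point_a, point_b)
error_mm = abs(computed_mm - measured_mm)

print(f"computed: {computed_mm:.1f} mm, error: {error_mm:.1f} mm")
```

Repeating this for point pairs at the center and in the corners of the frame is what exposes the position-dependent error described above.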

Shouldn't the transformation functions correct for distortion? Am I missing crucial steps for getting accurate data? Or might this be something else entirely?

Thank you for your help!

Adding a response from a GitHub issue:

A couple of points:

  1. How did you choose the 2D points — by manually inspecting the IR image, or the depth image? The point is, you should start with an accurate 2D pixel that exactly matches the 3D point. You will often need some texture when doing this by eye, e.g. a target board with markers visible in the IR spectrum; you can then pixel-inspect (assuming the human eye gives enough precision) to find the center of a marker, or use a CV algorithm to detect the 2D points.
  2. Try transforming the 2D depth pixel only into a 3D point in depth camera space (instead of color 3D space). You only need to change the last parameter of `TransformTo3D` to `K4A.CalibrationDeviceType.Depth`, then compare the relative distance from point A to point B with the real-world measurement. This helps narrow down whether using only the depth camera intrinsics gives a better result (instead of going all the way to color space). If the 3D points from depth camera space give better results than the 3D points from color space, there may be a calibration issue with the extrinsics or the color intrinsics.
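For intuition, the depth-2D-to-depth-3D step is, ignoring lens distortion, a plain pinhole unprojection. The Python sketch below uses hypothetical intrinsics and is not the SDK implementation — the real `TransformTo3D` additionally reverses the calibrated lens distortion before unprojecting — but it shows the core geometry:

```python
# Ideal pinhole unprojection: pixel (u, v) plus its depth -> 3D point in
# depth camera space. The real SDK also undistorts the pixel using the
# calibrated distortion coefficients; that step is omitted here.
# fx, fy, cx, cy are hypothetical depth-camera intrinsics.

def depth_pixel_to_3d(u, v, depth_mm, fx, fy, cx, cy):
    """Unproject pixel (u, v) at depth `depth_mm` into camera space (mm)."""
    x = (u - cx) * depth_mm / fx
    y = (v - cy) * depth_mm / fy
    return (x, y, depth_mm)

# Example: the principal-point pixel lands on the optical axis.
print(depth_pixel_to_3d(320.0, 288.0, 2700.0, fx=504.0, fy=504.0, cx=320.0, cy=288.0))
# -> (0.0, 0.0, 2700.0)
```

Because distortion grows toward the image edges, the omitted undistortion step is exactly where errors like the 6 cm corner discrepancy would originate if the distortion model or calibration were off.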

Finally, the depth camera's 6-degree tilt relative to the color camera should not matter. The intrinsics are calibrated for each camera's distortion, and the extrinsics are calibrated to account for the camera mechanics.
