I am working on detecting a rectangle using the depth camera. At this point, I am struggling to produce accurate 3D coordinates for a selected pixel with the NFOV depth camera. I tested this by selecting two points on a test board, transforming them with the depth-2D-to-color-3D function of the Kinect SDK C# wrapper, calculating the distance between the returned coordinates, and comparing it against the measured real-world distance between the selected points (918 mm). At a 2.7 m range, I get a 2 cm error at the image center, while in the corners the error reaches up to 6 cm.
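The verification step described above can be sketched as follows. The two 3D coordinates are hypothetical placeholders standing in for whatever the SDK's 2D-to-3D transform returns, not real measurement data:

```python
import math

def point_distance_mm(p1, p2):
    """Euclidean distance between two 3D points given in millimeters."""
    return math.dist(p1, p2)

# Hypothetical 3D coordinates (mm) returned by the SDK for the two selected pixels
a = (-455.0, 12.0, 2700.0)
b = (460.0, 15.0, 2712.0)

measured_mm = 918.0  # tape-measured distance between the points on the board
estimated_mm = point_distance_mm(a, b)
error_mm = abs(estimated_mm - measured_mm)
```

A 2 cm error at center then corresponds to `error_mm` around 20; the corner cases push it toward 60.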
Shouldn't the transformation functions correct for distortion? Am I missing a crucial step for obtaining accurate data? Could it be something else?
Thank you for your help!
Adding the response from the GitHub issue:
A couple of points:
Finally, the depth camera's 6-degree tilt relative to the color camera should not matter. The calibration intrinsics account for each camera's distortion, and the extrinsics are calibrated to account for the camera mechanics.
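To illustrate why the tilt is absorbed by calibration: the extrinsics are just a rigid transform (rotation plus translation) from the depth frame to the color frame. The sketch below uses made-up values for the rotation and translation; the real numbers come from the device's factory calibration, not from this code:

```python
import numpy as np

def depth_to_color_frame(point_depth, R, t):
    """Apply depth-to-color extrinsics: rotate, then translate.
    R (3x3 rotation) and t (3-vector, mm) are illustrative, not real calibration data."""
    return R @ np.asarray(point_depth, dtype=float) + t

# Illustrative extrinsics: ~6 degree tilt about the x-axis, small baseline offset (mm)
theta = np.deg2rad(6.0)
R = np.array([[1.0, 0.0,            0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
t = np.array([-32.0, -2.0, 4.0])  # made up for illustration

p_color = depth_to_color_frame([0.0, 0.0, 2700.0], R, t)
```

Because the SDK applies this transform (together with each camera's distortion model) internally, a mechanical tilt between the two cameras does not by itself introduce error.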