
Get point cloud from Kinect's depth data

I am new to Kinect. I wanted to know how to get a point cloud from the Kinect's depth data. I have tried getting the depth pixels and colorizing the near pixels based on depth. Now I need to build a 3D map from the depth data, so I guess I first need the point cloud. How should I proceed?

I have never used a Kinect, but given that the input is 2D pixels with depth data, you need to take those pixels and unproject them into world space (assuming you have already set up your virtual camera with view and projection matrices). The depth recorded for each pixel gives you its actual Z position. Keep in mind that those 3D points will only be the points visible to the Kinect sensor.
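The unprojection step above can be sketched with a simple pinhole camera model. This is a minimal NumPy sketch, not Kinect SDK code; the intrinsic values (`FX`, `FY`, `CX`, `CY`) below are hypothetical placeholders for a 640×480 depth camera, and you should substitute your own device's calibration:

```python
import numpy as np

# Hypothetical pinhole intrinsics for a 640x480 depth camera.
# Replace with your sensor's calibrated values.
FX, FY = 594.21, 591.04   # focal lengths, in pixels
CX, CY = 339.5, 242.7     # principal point, in pixels

def depth_to_point_cloud(depth_m):
    """Unproject an (H, W) depth image in metres to an (N, 3) point cloud.

    For each pixel (u, v) with depth Z, the camera-space point is:
        X = (u - cx) * Z / fx
        Y = (v - cy) * Z / fy
    Pixels with zero depth (no sensor reading) are dropped.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # keep only valid (visible) points
```

This yields points in the depth camera's coordinate frame; to place them in a shared world frame you would additionally apply the inverse of your view matrix.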

If you want to convert the 3D point clouds into a 3D mesh, you need to find the convex hull of the points and then triangulate it.
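As a rough sketch of the hull-and-triangulate idea, SciPy's `ConvexHull` (a Qhull wrapper) returns the hull surface directly as triangles. Note that a convex hull only captures the convex outline of the cloud; for general non-convex surfaces you would typically reach for a surface-reconstruction algorithm such as Poisson reconstruction instead. The sample points here are arbitrary illustration data:

```python
import numpy as np
from scipy.spatial import ConvexHull  # assumes SciPy is installed

# Illustration data: the 8 corners of a unit cube plus one interior point.
points = np.array([
    [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1],
    [0.5, 0.5, 0.5],   # interior point; will not appear on the hull
], dtype=float)

hull = ConvexHull(points)
# hull.simplices is an (M, 3) array of vertex indices: each row is one
# triangle of the hull surface, i.e. a ready-made triangle mesh.
triangles = hull.simplices
```

Here the interior point is excluded from `hull.vertices`, and each cube face comes back split into triangles, which is exactly the triangulated mesh the answer describes.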

I would look at my question about getting a point cloud of my body, especially davidbates' answer, as it describes exactly how to create the effect with depth data. If you want code, I would look at this website, which I have used before to create point clouds.
