
How to map a depth frame to a color frame WITHOUT a Kinect

I am trying to map a depth frame to a color frame without a Kinect. I previously acquired the images using a Kinect, and now, based on the depth image (where I can clearly see the person's body shape), I want to align the color and depth images without using the Kinect SDK method MapDepthFrameToColorFrame (I can't call this method without a connected Kinect).

How to do this?

I thought of taking the depth points whose value is 255 (thresholded) and then copying the same [x, y] coordinates onto the color image, but that doesn't give any results.

Thanks in advance

I previously found an article explaining how to do this ( here's the link ).

Here's the pseudocode from that page (all the camera parameters are listed there as well). First, back-project each depth pixel to a 3D point using the depth camera intrinsics; then transform the point into the RGB camera's coordinate frame with the extrinsics (R, T); finally, project it onto the RGB image plane with the RGB camera intrinsics:

P3D.x = (x_d - cx_d) * depth(x_d,y_d) / fx_d
P3D.y = (y_d - cy_d) * depth(x_d,y_d) / fy_d
P3D.z = depth(x_d,y_d)

P3D' = R.P3D + T
P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb
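The pseudocode above can be sketched in NumPy as follows. Note that all intrinsic and extrinsic values here are placeholders: you must substitute the calibration parameters for your own sensor (e.g. the ones published on the linked page), and the depth must be in metric units (metres), not raw disparity.

```python
import numpy as np

# Placeholder calibration values -- replace with your own camera's parameters.
fx_d, fy_d, cx_d, cy_d = 594.21, 591.04, 339.5, 242.7        # depth intrinsics
fx_rgb, fy_rgb, cx_rgb, cy_rgb = 529.21, 525.56, 328.94, 267.48  # RGB intrinsics
R = np.eye(3)                     # rotation depth -> RGB (placeholder)
T = np.array([0.025, 0.0, 0.0])   # translation in metres (placeholder)

def depth_to_rgb_pixel(x_d, y_d, depth_m):
    """Map one depth pixel (x_d, y_d) with depth in metres to RGB pixel coords."""
    # 1. Back-project the depth pixel to a 3D point in the depth camera frame.
    P3D = np.array([
        (x_d - cx_d) * depth_m / fx_d,
        (y_d - cy_d) * depth_m / fy_d,
        depth_m,
    ])
    # 2. Transform the point into the RGB camera frame.
    P3D_rgb = R @ P3D + T
    # 3. Project the point onto the RGB image plane.
    x_rgb = P3D_rgb[0] * fx_rgb / P3D_rgb[2] + cx_rgb
    y_rgb = P3D_rgb[1] * fy_rgb / P3D_rgb[2] + cy_rgb
    return x_rgb, y_rgb
```

In practice you would loop (or vectorize) this over every valid depth pixel and look up the color at the resulting RGB coordinate, skipping pixels that land outside the RGB image bounds.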

Just like you, I have been trying to map depth to RGB data. However, even after applying this algorithm, I am still left with some misalignment: the color mapping appears shifted in a direction that varies from image to image in my dataset.

I hope this at least points you in the right direction.
