
Getting world coordinates from OpenGL depth buffer

I am using pyBullet, which is a Python wrapper around the Bullet3 physics engine, and I need to create a point cloud from a virtual camera.
The engine uses a basic OpenGL renderer, and I am able to get values from the OpenGL depth buffer:

img = p.getCameraImage(imgW, imgH, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]
depthBuffer = img[3]

Now I have a width*height array of depth values. How can I get world coordinates from this? I tried to save a .ply point cloud with points (width, height, depthBuffer(width, height)), but this doesn't produce a point cloud that looks like the objects in the scene.

I also tried to correct the depth using the near and far planes:

depthImg = float(depthBuffer[h, w])
far = 1000.
near = 0.01
depth = far * near / (far - (far - near) * depthImg)
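
The same correction can be applied to the whole buffer at once. A minimal NumPy sketch, assuming depthBuffer holds the raw values in [0, 1] and near/far as above:

    import numpy as np

    near, far = 0.01, 1000.
    d = np.asarray(depthBuffer, dtype=np.float64)  # raw depth-buffer values in [0, 1]
    # invert the non-linear depth encoding to get linear eye-space depth
    linearDepth = far * near / (far - (far - near) * d)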

but the result was also a weird point cloud. How can I create a realistic point cloud from the depth buffer data? Is it even possible?

I did something similar in C++, but there I used glm::unProject:

for (size_t i = 0; i < height; i += density) {
    for (size_t j = 0; j < width; j += density) {

        glm::vec3 win(i, j, depth);
        glm::vec4 position(glm::unProject(win, identity, projection, viewport), 0.0);

EDIT:

Based on Rabbid76's answer I used PyGLM, which worked; I am now able to obtain XYZ world coordinates to create a point cloud. However, the depth values in the point cloud look distorted. Am I getting the depth from the depth buffer correctly?

    for h in range(0, imgH, stepY):
        for w in range(0, imgW, stepX):
            depthImg = float(np.array(depthBuffer)[h, w])
            far = 1000.
            near = 0.01
            depth = far * near / (far - (far - near) * depthImg)
            win = glm.vec3(h, w, depthBuffer[h][w])
            position = glm.unProject(win, model, projGLM, viewport)
            f.write(str(position[0]) + " " + str(position[1]) + " " + str(depth) + "\n")

Here is my solution. We just need to know how the view matrix and the projection matrix work. There are computeProjectionMatrixFOV and computeViewMatrix functions in pybullet. See http://www.songho.ca/opengl/gl_projectionmatrix.html and http://ksimek.github.io/2012/08/22/extrinsic/ for background. In a word, point_in_world = inv(projection_matrix * viewMatrix) * NDC_pos.
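
The matrices and depth array used in the code below can come straight from pybullet. A minimal sketch (the camera placement and FOV values here are placeholders, not part of the original answer):

    import pybullet as p
    import numpy as np

    # hypothetical camera parameters; adjust to your scene
    view_matrix = p.computeViewMatrix(
        cameraEyePosition=[0, -2, 1],
        cameraTargetPosition=[0, 0, 0],
        cameraUpVector=[0, 0, 1])
    projection_matrix = p.computeProjectionMatrixFOV(
        fov=60, aspect=img_width / img_height, nearVal=0.01, farVal=100)
    img = p.getCameraImage(img_width, img_height, view_matrix, projection_matrix)
    depth_np_arr = np.reshape(img[3], [img_height, img_width])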

glm.unProject is another solution; see the sketch after the code below.

    import numpy as np

    stepX = 10
    stepY = 10
    pointCloud = np.empty([int(img_height/stepY), int(img_width/stepX), 4])
    projectionMatrix = np.asarray(projection_matrix).reshape([4, 4], order='F')
    viewMatrix = np.asarray(view_matrix).reshape([4, 4], order='F')
    tran_pix_world = np.linalg.inv(np.matmul(projectionMatrix, viewMatrix))
    for h in range(0, img_height, stepY):
        for w in range(0, img_width, stepX):
            x = (2*w - img_width)/img_width     # NDC x in [-1, 1]
            y = -(2*h - img_height)/img_height  # NDC y in [-1, 1]; be careful: image rows grow downward
            z = 2*depth_np_arr[h, w] - 1        # depth buffer [0, 1] -> NDC z in [-1, 1]
            pixPos = np.asarray([x, y, z, 1])
            position = np.matmul(tran_pix_world, pixPos)

            pointCloud[int(h/stepY), int(w/stepX), :] = position / position[3]  # perspective divide
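
For the glm.unProject route mentioned above, a minimal PyGLM sketch (assuming the same view_matrix, projection_matrix, depth_np_arr, and step sizes; note the y flip, since image rows grow downward while OpenGL window coordinates grow upward):

    import glm  # PyGLM

    viewGLM = glm.mat4(*view_matrix)         # pybullet matrices are column-major, 16 floats
    projGLM = glm.mat4(*projection_matrix)
    viewport = glm.vec4(0, 0, img_width, img_height)

    points = []
    for h in range(0, img_height, stepY):
        for w in range(0, img_width, stepX):
            # window coords: x = column, y flipped, z = raw depth-buffer value in [0, 1]
            win = glm.vec3(w, img_height - h, float(depth_np_arr[h, w]))
            pos = glm.unProject(win, viewGLM, projGLM, viewport)
            points.append([pos.x, pos.y, pos.z])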
