
Getting world coordinates from OpenGL depth buffer

I am using pyBullet, which is a Python wrapper around the Bullet3 physics engine, and I need to create a point cloud from a virtual camera.
The engine uses a basic OpenGL renderer, and I am able to get values from the OpenGL depth buffer:

img = p.getCameraImage(imgW, imgH, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]    # RGBA pixels
depthBuffer = img[3]  # OpenGL depth buffer values
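
Depending on how pybullet was built, img[3] may come back as a flat sequence rather than a 2-D array; a minimal sketch of reshaping it (assuming the image is imgH rows by imgW columns):

    import numpy as np
    depthBuffer = np.reshape(img[3], (imgH, imgW))   # non-linear depth values in [0, 1]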

Now I have a width*height array with depth values. How can I get world coordinates from this? I tried to save a .ply point cloud with points (width, height, depthBuffer(width, height)), but this does not create a point cloud that looks like the objects in the scene.

I also tried to correct the depth with the near and far planes:

depthImg = float(depthBuffer[h, w])
far = 1000.
near = 0.01
depth = far * near / (far - (far - near) * depthImg)  # linearized eye-space distance
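
For reference, this is the standard linearization of a [0, 1] depth buffer for a perspective projection; it assumes the same near/far values that were used to build the projection matrix:

    d = \frac{1/near - 1/z}{1/near - 1/far}
    \quad\Longleftrightarrow\quad
    z = \frac{far \cdot near}{far - d \, (far - near)}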

but the result with this was also a weird point cloud. How can I create a realistic point cloud from the depth buffer data? Is it even possible?

I did something similar in C++, but there I used glm::unProject:

for (size_t i = 0; i < height; i += density) {
    for (size_t j = 0; j < width; j += density) {

        glm::vec3 win(i, j, depth);
        glm::vec4 position(glm::unProject(win, identity, projection, viewport), 0.0);
        // ...
    }
}

EDIT:

Based on Rabbid76's answer I used PyGLM, which worked. I am now able to obtain XYZ world coordinates to create the point cloud, but the depth values in the point cloud look distorted. Am I getting the depth from the depth buffer correctly?

    for h in range(0, imgH, stepX):
        for w in range(0, imgW, stepY):
            depthImg = float(np.array(depthBuffer)[h, w])
            far = 1000.
            near = 0.01
            depth = far * near / (far - (far - near) * depthImg)
            win = glm.vec3(h, w, depthBuffer[h][w])
            position = glm.unProject(win, model, projGLM, viewport)
            f.write(str(position[0]) + " " + str(position[1]) + " " + str(depth) + "\n")

Here is my solution. We just need to know how the view matrix and the projection matrix work. There are computeProjectionMatrixFOV and computeViewMatrix functions in pybullet. See http://www.songho.ca/opengl/gl_projectionmatrix.html and http://ksimek.github.io/2012/08/22/extrinsic/. In a word, point_in_world = inv(projection_matrix * viewMatrix) * NDC_pos.
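
For reference, a minimal sketch of how the matrices and the depth image used below might be produced; the camera pose, FOV, near/far planes and resolution here are assumptions:

    import numpy as np
    import pybullet as p

    # assumes a pybullet client is already connected and a scene is loaded
    img_width, img_height = 640, 480          # assumed resolution
    near, far = 0.01, 1000.                   # assumed clipping planes

    view_matrix = p.computeViewMatrix(
        cameraEyePosition=[0, -2, 1],          # assumed camera position
        cameraTargetPosition=[0, 0, 0],        # assumed look-at target
        cameraUpVector=[0, 0, 1])
    projection_matrix = p.computeProjectionMatrixFOV(
        fov=60, aspect=img_width / img_height, nearVal=near, farVal=far)

    img = p.getCameraImage(img_width, img_height, view_matrix, projection_matrix,
                           renderer=p.ER_BULLET_HARDWARE_OPENGL)
    depth_np_arr = np.reshape(img[3], (img_height, img_width))   # non-linear depth in [0, 1]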

glm.unProject is another solution (see the sketch after the code below).

    stepX = 10
    stepY = 10
    pointCloud = np.empty([int(img_height/stepY), int(img_width/stepX), 4])
    projectionMatrix = np.asarray(projection_matrix).reshape([4, 4], order='F')
    viewMatrix = np.asarray(view_matrix).reshape([4, 4], order='F')
    tran_pix_world = np.linalg.inv(np.matmul(projectionMatrix, viewMatrix))
    for h in range(0, img_height, stepY):
        for w in range(0, img_width, stepX):
            x = (2*w - img_width)/img_width        # NDC x in [-1, 1]
            y = -(2*h - img_height)/img_height     # NDC y in [-1, 1]; note the y flip (image row 0 is the top)
            z = 2*depth_np_arr[h, w] - 1           # NDC z in [-1, 1] from the [0, 1] depth buffer
            pixPos = np.asarray([x, y, z, 1])
            position = np.matmul(tran_pix_world, pixPos)

            pointCloud[int(h/stepY), int(w/stepX), :] = position / position[3]   # perspective divide
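
A sketch of the glm.unProject route mentioned above, using PyGLM and the same variable names as the snippets above; it assumes that pybullet's column-major 16-float matrices map directly onto glm.mat4, and that row 0 of the depth image is the top of the image (hence the y flip):

    import glm
    import numpy as np

    view = glm.mat4(*view_matrix)
    proj = glm.mat4(*projection_matrix)
    viewport = glm.vec4(0, 0, img_width, img_height)

    points = []
    for h in range(0, img_height, stepY):
        for w in range(0, img_width, stepX):
            d = float(depth_np_arr[h, w])           # raw [0, 1] depth, not linearized
            win = glm.vec3(w, img_height - h, d)    # window coords: x right, y up
            pos = glm.unProject(win, view, proj, viewport)
            points.append([pos.x, pos.y, pos.z])
    points = np.array(points)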
