
Convert rgb png and depth txt to point cloud

I have a series of rgb files in png format, as well as the corresponding depth files in txt format, which can be loaded with np.loadtxt . How can I merge these two files into a point cloud using open3d ?

I followed the procedure from obtain point cloud from depth numpy array using open3d - python , but the result is not human-readable.

The examples are listed here:

  • the source png: (image)
  • the pcd result: (image)

You can get the source files from this link (google drive) to reproduce my result. By the way, the depth and rgb are not registered.

Thanks.

I used laspy instead of open3d because I wanted to give some colors to your image:

import imageio
import matplotlib.pyplot as plt
import numpy as np

# first read the image for RGB values
image = imageio.imread(".../a542c.png")
# load the depth file
depth = np.loadtxt("/home/shaig93/Documents/internship_FWF/a542d.txt")
# create fake x, y coordinates with meshgrid
xv, yv = np.meshgrid(np.arange(400), np.arange(640), indexing='ij')
# save_las is a function based on laspy that was provided to me by my supervisor
save_las("fn.laz", image[:400, :, 0].flatten(), np.c_[yv.flatten(), xv.flatten(), depth.flatten()], cmap=plt.cm.magma_r)

And the result is this. As you can see, the objects are visible from the front. (image)

However, from the side they are not easy to distinguish. (image)

This leads me to think that your depth file is not that good.

Another idea would be to get rid of the 0 values in your depth file, so that you get a point cloud without the wall-like structure in the front. But of course that still does not solve the depth issue.
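As a sketch of that idea: the zero-depth pixels can be dropped with a boolean mask before writing the cloud. The small depth array here is a made-up stand-in for the real np.loadtxt data; xv and yv mirror the meshgrid from the snippet above.

```python
import numpy as np

# small stand-in for the real depth map; 0 marks invalid pixels
depth = np.array([[0, 120, 0],
                  [300, 0, 450]])
xv, yv = np.meshgrid(np.arange(depth.shape[0]), np.arange(depth.shape[1]), indexing='ij')

# flatten everything and keep only pixels with a valid (non-zero) depth
d = depth.flatten()
mask = d > 0
points = np.c_[yv.flatten()[mask], xv.flatten()[mask], d[mask]]
print(points.shape)  # one row per valid pixel
```

The same mask can also be applied to the flattened image values before passing them to save_las, so colors and coordinates stay aligned.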

ps. I know this is not a proper answer but I hope it was helpful on identifying the problem.

I had to play around with the settings and data a bit, and mainly used the answer from your SO link.

import numpy as np
import open3d as o3d

color = o3d.io.read_image("a542c.png")  # currently unused, since depth and rgb are not registered
depth = np.loadtxt("a542d.txt")

# build one 3D point per pixel: (row, column, depth)
vertices = []
for x in range(depth.shape[0]):
    for y in range(depth.shape[1]):
        vertices.append((float(x), float(y), depth[x][y]))

pcd = o3d.geometry.PointCloud()
point_cloud = np.array(vertices)
pcd.points = o3d.utility.Vector3dVector(point_cloud)
pcd.estimate_normals()
pcd = pcd.normalize_normals()
o3d.visualization.draw_geometries([pcd])

However, if you keep the code as provided, the whole scene looks very weird and unfamiliar. That is because your depth file contains values between 0 and almost 2.5 m. I introduced a cut-off at 500 or 1000 mm and removed all 0s, as suggested in the other answer. Additionally, I flipped the x-axis (float(-x) instead of float(x)) to match the orientation of your photo.

# ...
vertices = []
for x in range(depth.shape[0]):
    for y in range(depth.shape[1]):
        if 0 < depth[x][y] < 500:
            vertices.append((float(-x), float(y), depth[x][y]))
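The same filtering can be done without the Python loops, as a sketch in plain numpy (the tiny depth array is a made-up stand-in for the real np.loadtxt("a542d.txt") data; variable names mirror the snippet above):

```python
import numpy as np

# stand-in depth map in mm; real data comes from np.loadtxt("a542d.txt")
depth = np.array([[0, 120, 600],
                  [300, 499, 2500]])

# pixel indices passing the 0 < depth < 500 cut-off
xs, ys = np.nonzero((depth > 0) & (depth < 500))

# (-row, column, depth) rows, matching the flipped x-axis of the loop version
vertices = np.c_[-xs.astype(float), ys.astype(float), depth[xs, ys]]
```

The resulting N x 3 array can be fed to o3d.utility.Vector3dVector directly, and for a full-resolution depth map this avoids iterating over every pixel in Python.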

For a good perspective I had to rotate the view manually. open3d probably provides methods to do this automatically (I quickly tried pcd.transform() from your SO link above; it may help if needed). Results:
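pcd.transform() applies a 4x4 homogeneous matrix to the points. Its effect on an N x 3 array can be sketched in plain numpy; the points and the 90-degree angle below are arbitrary examples, not values from the question.

```python
import numpy as np

# a few example points (N x 3), standing in for np.asarray(pcd.points)
points = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

theta = np.pi / 2  # arbitrary rotation about the z-axis
T = np.array([[np.cos(theta), -np.sin(theta), 0.0, 0.0],
              [np.sin(theta),  np.cos(theta), 0.0, 0.0],
              [0.0,            0.0,           1.0, 0.0],
              [0.0,            0.0,           0.0, 1.0]])

# homogeneous coordinates: append a 1, multiply, drop the last column
hom = np.c_[points, np.ones(len(points))]
rotated = (T @ hom.T).T[:, :3]
```

In open3d the equivalent is pcd.transform(T); there is also pcd.rotate(), which takes a 3x3 rotation matrix, e.g. from o3d.geometry.get_rotation_matrix_from_xyz.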

500 mm cut-off: (image: 500 mm cut-off) and 1000 mm cut-off: (image: 1000 mm cut-off).
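Since the original question was about merging the rgb image with the depth, one way (once the two are registered) is to attach a color to each surviving point. A numpy sketch, assuming image is the H x W x 3 png array and reusing the same cut-off mask; the tiny arrays are made-up stand-ins for the real files:

```python
import numpy as np

# stand-ins for the real data: a 2 x 3 depth map (mm) and a matching RGB image
depth = np.array([[0, 120, 600],
                  [300, 499, 2500]])
image = np.arange(2 * 3 * 3, dtype=np.uint8).reshape(2, 3, 3)

mask = (depth > 0) & (depth < 500)
xs, ys = np.nonzero(mask)
points = np.c_[-xs.astype(float), ys.astype(float), depth[xs, ys]]
colors = image[xs, ys] / 255.0  # open3d expects RGB in [0, 1]
```

In open3d these would be assigned with pcd.points = o3d.utility.Vector3dVector(points) and pcd.colors = o3d.utility.Vector3dVector(colors). Note that the poster says the depth and rgb are not registered, so this per-pixel pairing is only approximate.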
