
How to generate 3D point cloud from Lidar range data

I have a set of videos that were captured by a lidar. The data is raw, meaning that each video file contains range and intensity (grayscale) data. Now I want to create a 3D point cloud from the range data. From what I have read, my lidar data looks much like Kinect data (depth + intensity). But while there is code and there are equations for converting Kinect depth to a 3D point cloud, I haven't found anything equivalent for lidar data. I hope someone can help me with an equation or sample code (preferably in Matlab) that converts lidar range data to a 3D point cloud.

Edit: The videos contain human targets, both indoors and outdoors. Unfortunately, I cannot share any data. The lidar camera used for the recordings is a TigerCub 3D flash lidar. I don't have access to the camera, only to the data. I also checked the camera's manual but couldn't find any helpful information. As with the Kinect, I assume there must be a relation between the range (depth) data and the 3D point cloud, and all I need is that equation to generate the point cloud.

This might be useful to anyone else with the same question. A 3D flash lidar camera operates (and even looks) much like a 2D digital camera. Its focal plane array has pixels arranged in rows and columns, but each pixel also captures a third dimension: the depth, or range, of the object. So a 3D flash lidar camera produces intensity data as well as range data.

This similarity between a 3D flash lidar and a 2D digital camera lets us apply the same pinhole camera model used for 2D digital cameras to 3D flash lidar cameras. The 3D point cloud can therefore be computed from the lidar range data with the following equations:

x_pointCloud = x_image * range / f
y_pointCloud = y_image * range / f
z_pointCloud = range

Here f is the focal length of the camera; x_image and y_image are the pixel coordinates at each point, and range is the range value at that pixel (extracted from the range data), while x_pointCloud, y_pointCloud, and z_pointCloud are the corresponding coordinates in the point cloud. You just need to multiply by a constant to get the data in the unit you want (depending on the unit of f). For a thorough explanation, see this paper.
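The equations above can be sketched in code. The question asked for Matlab, but here is a minimal Python/NumPy version of the same pinhole conversion; the function name is my own, f must be the focal length in pixels, and I assume the principal point is at the image center (adjust if your camera's calibration says otherwise):

```python
import numpy as np

def range_to_pointcloud(range_img, f, cx=None, cy=None):
    """Convert a flash-lidar range image to an N x 3 point cloud
    using the pinhole model: x = u*range/f, y = v*range/f, z = range.

    range_img : 2D array of range values (any length unit)
    f         : focal length in pixels
    cx, cy    : principal point; defaults to the image center (assumption)
    """
    h, w = range_img.shape
    if cx is None:
        cx = (w - 1) / 2.0
    if cy is None:
        cy = (h - 1) / 2.0

    # Pixel coordinates measured from the principal point
    u, v = np.meshgrid(np.arange(w) - cx, np.arange(h) - cy)

    r = range_img.ravel()
    x = u.ravel() * r / f
    y = v.ravel() * r / f
    z = r
    return np.column_stack((x, y, z))
```

For example, a pixel at the principal point with range 2.0 maps to (0, 0, 2.0); off-center pixels fan outward in proportion to their range. The resulting units are whatever units the range data is in, as long as f is in pixels.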

You can also refer to this tool: https://github.com/PRBonn/semantic-kitti-api. The visualize.py in it performs the range-image conversion on the raw xyz data from KITTI. You can adapt its parameters to your data, converting xyz to range images and range data back to xyz.
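For reference, the semantic-kitti-api tooling uses a spherical projection (each image row is an elevation angle, each column an azimuth), which differs from the pinhole model above. A minimal sketch of the inverse mapping, range image back to xyz, is below; the FOV defaults are the HDL-64 values used in that repo and are assumptions you would replace with your sensor's values:

```python
import numpy as np

def spherical_range_to_xyz(range_img, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Invert a spherical range-image projection (KITTI-style).

    Row 0 is the highest elevation; columns sweep azimuth from +pi
    to -pi. Each stored value is the distance along that ray.
    """
    h, w = range_img.shape
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)

    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    elevation = fov_up - rows / (h - 1) * (fov_up - fov_down)
    azimuth = np.pi - cols / (w - 1) * 2.0 * np.pi

    r = range_img
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.stack((x.ravel(), y.ravel(), z.ravel()), axis=-1)
```

This is only illustrative for a spinning lidar's panoramic range image; a flash lidar like the TigerCub images through a lens, so the pinhole equations earlier in this answer are the appropriate model for that data.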

