
Create synthetic LiDAR point clouds

I'd like to create synthetic training data for deep learning models that do segmentation and classification on point clouds. The ground truth / real data are LiDAR point clouds. I scripted a simple mesh-sampling routine in python/open3d and can quickly convert 3D scenes to point clouds (see fig 1), but I need to reproduce certain characteristics of LiDAR sensors.

Fig. 1: mesh to point cloud in open3d

Blensor ( https://www.blensor.org/ ) works the way I need it (fig 2), but I don't want to use Blender at the moment, and its results are not of sufficient quality for my use case.

Fig. 2: Blensor result

As a first step I'd just like to cut off the points that are not visible from a given LiDAR sensor position, mainly to create the "shadows" that are important for making the training data more realistic. Do you have any suggestions for a simple and fast workaround? My point cloud is stored in a pandas DataFrame with x, y, z and nx, ny, nz columns.
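One simple and fast workaround, given only the x, y, z columns of the DataFrame described above, is a spherical depth buffer: bin every point by its azimuth/elevation as seen from the sensor and keep only the nearest point per bin, so everything "behind" a closer surface falls into the shadow. A minimal numpy/pandas sketch (the function name, the bin resolutions, and the bin-packing constant are my own choices, not from the thread):

```python
import numpy as np
import pandas as pd

def remove_occluded(df, sensor, az_res_deg=0.2, el_res_deg=0.2):
    """Keep, per angular bin as seen from `sensor`, only the nearest point.

    Emulates LiDAR shadowing: points behind a closer surface in the same
    direction are dropped. `df` needs x, y, z columns; the angular
    resolutions roughly play the role of the sensor's beam divergence.
    """
    p = df[["x", "y", "z"]].to_numpy() - np.asarray(sensor, dtype=float)
    r = np.linalg.norm(p, axis=1)
    az = np.degrees(np.arctan2(p[:, 1], p[:, 0]))                # [-180, 180)
    el = np.degrees(np.arcsin(p[:, 2] / np.maximum(r, 1e-12)))   # [-90, 90]
    # Quantise directions into a spherical "depth buffer". Packing both bin
    # indices into one integer (factor must exceed the elevation bin count)
    # makes np.unique usable as the buffer.
    bins = (np.round(az / az_res_deg).astype(np.int64) * 100_000
            + np.round(el / el_res_deg).astype(np.int64))
    order = np.argsort(r)                      # nearest points first
    _, first = np.unique(bins[order], return_index=True)
    keep = order[first]                        # nearest point per bin survives
    return df.iloc[np.sort(keep)]
```

open3d also exposes a `hidden_point_removal` method on point clouds (the Katz et al. hidden-point-removal operator), which may be worth comparing against this binning approach.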

Thx in advance, reiti

If your 3D scene can be described in the form of distance functions (essentially consisting of a set of simple geometric shapes, as opposed to point cloud data), you may be good to go with an easily modified ray-tracing algorithm that emulates a LiDAR sensor.

For each LiDAR "ray" (i.e. for every direction) you only need to save the xyz coordinates of the first collision with the scene. This also gives you full freedom to match the properties of the original real-world sensor (such as the angles and the number of points).

How easy the distance calculation between scene and sensor ray will be depends on the scene you have set up and how it is represented. Sorry for not being able to provide a ready-to-use implementation, but this might give you some direction.
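The approach described above can be sketched with sphere tracing against a signed distance function: step along each ray by the current scene distance until a surface is hit, and record the first hit per direction. Everything concrete here is an illustrative assumption, not part of the answer: the toy scene (one sphere plus a ground plane), the sensor geometry (16 channels over +/-15 degrees, loosely modelled on spinning LiDARs), and all function names.

```python
import numpy as np

def scene_sdf(p):
    """Signed distance to a toy scene: a unit sphere at (0, 5, 0) and a
    ground plane at z = -1. min() combines the shapes (CSG union)."""
    d_sphere = np.linalg.norm(p - np.array([0.0, 5.0, 0.0])) - 1.0
    d_plane = p[2] + 1.0
    return min(d_sphere, d_plane)

def trace_ray(origin, direction, max_dist=50.0, eps=1e-4):
    """Sphere tracing: advance by the scene distance until a surface is
    reached; return the first collision point, or None on a miss."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    while t < max_dist:
        p = origin + t * direction
        d = scene_sdf(p)
        if d < eps:          # close enough to the surface: record the hit
            return p
        t += d               # safe step: no surface closer than d
    return None

def simulate_lidar(origin, n_az=360, n_el=16, el_range=(-15.0, 15.0)):
    """Emulate a spinning LiDAR: one ray per (azimuth, elevation) pair,
    keeping only the first hit of each ray, which produces the shadows."""
    hits = []
    for el in np.linspace(el_range[0], el_range[1], n_el):
        for az in np.linspace(0.0, 360.0, n_az, endpoint=False):
            a, e = np.radians(az), np.radians(el)
            d = np.array([np.cos(e) * np.cos(a),
                          np.cos(e) * np.sin(a),
                          np.sin(e)])
            hit = trace_ray(origin, d)
            if hit is not None:
                hits.append(hit)
    return np.array(hits)
```

Swapping `scene_sdf` for your own distance functions and matching `n_az`/`n_el`/`el_range` to a real sensor's specs is all that is needed to adapt the sketch.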
