
range 3D map visualization (with vtk?)

We have a ZMap (a.k.a. depth map) image obtained from 3D triangulation with a laser and a camera. We know the depth value of each pixel and the resolution of the camera (each pixel is associated with a 3D coordinate in mm). Our goal is to visualize the ZMap as a 2D surface, so we thought of creating a point cloud, generating a mesh and displaying it with some 3D library. We thought VTK could be the right choice, but we encountered some problems.

First we tried an unorganized structure (vtkPolyData), generating the mesh by 3D Delaunay triangulation. The code only works properly when the number of points is < 50k. Our 3D reconstruction is composed of 480k points and the computation time is far too high.

Then we switched to an organized structure (points with connectivity). IMHO this additional information should reduce the computation time needed to create the mesh, but we are not able to understand how to create a vtkStructuredGrid and feed it with our Z values to get a 2D meshed surface.
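
From the documentation we guess it should look roughly like the sketch below (the helper name, the flat zmap layout and the pixelSize scaling are just placeholders we made up), but we are not sure this actually produces a meshed surface:

    #include <vtkSmartPointer.h>
    #include <vtkPoints.h>
    #include <vtkStructuredGrid.h>
    #include <vtkDataSetMapper.h>
    #include <vtkActor.h>
    #include <vtkRenderer.h>
    #include <vtkRenderWindow.h>
    #include <vtkRenderWindowInteractor.h>
    #include <vector>

    // zmap[v * width + u] = depth in mm; pixelSize = mm per pixel
    // (names and the flat-vector layout are just our assumptions)
    void ShowZMap(const std::vector<double>& zmap, int width, int height, double pixelSize)
    {
      auto points = vtkSmartPointer<vtkPoints>::New();
      points->Allocate(static_cast<vtkIdType>(width) * height);
      // u must vary fastest so the point order matches the grid dimensions
      for (int v = 0; v < height; ++v)
        for (int u = 0; u < width; ++u)
          points->InsertNextPoint(u * pixelSize, v * pixelSize, zmap[v * width + u]);

      // a single-slice structured grid: width x height x 1 -> a quad-meshed surface
      auto grid = vtkSmartPointer<vtkStructuredGrid>::New();
      grid->SetDimensions(width, height, 1);
      grid->SetPoints(points);

      // standard rendering pipeline
      auto mapper = vtkSmartPointer<vtkDataSetMapper>::New();
      mapper->SetInputData(grid);

      auto actor = vtkSmartPointer<vtkActor>::New();
      actor->SetMapper(mapper);

      auto renderer = vtkSmartPointer<vtkRenderer>::New();
      renderer->AddActor(actor);

      auto window = vtkSmartPointer<vtkRenderWindow>::New();
      window->AddRenderer(renderer);

      auto interactor = vtkSmartPointer<vtkRenderWindowInteractor>::New();
      interactor->SetRenderWindow(window);

      window->Render();
      interactor->Start();
    }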

Is this the proper way to do it? Has anyone ever tried this?

Thanks in advance

If your points are organized in a 2D grid (as they come from scanning), then you do not need complicated and slow triangulation algorithms:

  1. organize the points into a 2D table/grid

    • something like this:
    • (image: point grid)
    • the grey squares are the points
    • so the point x,y axes are parallel to the table/grid u,v indices
    • if your data is not yet organized in this manner then sort the points so it is
    • store it in something like: double pnt[maxU][maxV][3];
    • in case x,y are directly aligned to the grid you only need to store the Z coordinate, to save some memory (see the first sketch after this list)
  2. segment

    • in a 3D scanned point cloud organized like this, segmentation is easy
    • just join together all neighboring points whose Z coordinate difference is smaller than a threshold
    • the colored point squares in my image
    • the gray points are now the background (out-of-range Z coordinates)
    • so add a structure like: int obj[maxU][maxV];
    • set all background/out-of-range points to obj[u][v]=-1;
    • set the rest to a unique obj index like (u*maxV+v)
    • now process each line and if neighboring points have close Z coordinates, reindex one of them (so the object grows)
    • when done, process all lines again and try to merge/reindex adjacent objects
    • loop until no merge occurs
    • this is way faster than flood-fill based segmentation (if proper speed-up structures are used for the line merging); a compact version is sketched after this list
  3. triangulate

    • process the table quad by quad
    • if all 4 points belong to the same object from the segmentation, add the quad to the mesh
    • if just 3 do, add the triangle (4 possible combinations)
    • if only 1 or 2 do, do not add anything
    • the result is the colored areas (see the triangulation sketch after this list)
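
A compact C++ sketch of steps 1 and 2 under the layout above (flat std::vector storage instead of fixed-size arrays, NaN marking out-of-range pixels; the zThreshold parameter and the simple label-propagation merge are illustrative choices, without the line-merging speed-up structures mentioned in step 2):

    #include <vector>
    #include <cmath>
    #include <algorithm>

    // step 1: organized storage; x,y are assumed aligned to the grid,
    // so only Z is stored per point
    struct ZGrid
    {
      int maxU = 0, maxV = 0;
      std::vector<double> z;    // z[u * maxV + v]   = Z in mm, NaN = out of range
      std::vector<int>    obj;  // obj[u * maxV + v] = object index, -1 = background

      double& Z(int u, int v) { return z[u * maxV + v]; }
      int&    O(int u, int v) { return obj[u * maxV + v]; }
    };

    // step 2: join neighbors whose Z difference is below the threshold,
    // propagating the lower object index until nothing changes
    void Segment(ZGrid& g, double zThreshold)
    {
      g.obj.assign(g.z.size(), -1);
      for (int u = 0; u < g.maxU; ++u)
        for (int v = 0; v < g.maxV; ++v)
          if (std::isfinite(g.Z(u, v)))
            g.O(u, v) = u * g.maxV + v;          // unique starting index

      bool merged = true;
      while (merged)                             // loop until no merge occurs
      {
        merged = false;
        for (int u = 0; u < g.maxU; ++u)
          for (int v = 0; v < g.maxV; ++v)
          {
            if (g.O(u, v) < 0) continue;
            // compare with right and bottom neighbors only (covers all pairs)
            const int du[2] = { 1, 0 }, dv[2] = { 0, 1 };
            for (int k = 0; k < 2; ++k)
            {
              int uu = u + du[k], vv = v + dv[k];
              if (uu >= g.maxU || vv >= g.maxV || g.O(uu, vv) < 0) continue;
              if (std::fabs(g.Z(u, v) - g.Z(uu, vv)) >= zThreshold) continue;
              int lo = std::min(g.O(u, v), g.O(uu, vv));
              if (g.O(u, v) != lo || g.O(uu, vv) != lo)
              {
                g.O(u, v) = g.O(uu, vv) = lo;    // grow the object with the lower index
                merged = true;
              }
            }
          }
      }
    }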
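
And a matching sketch of step 3, adding a quad where all 4 corners share an object and a triangle where exactly 3 do:

    #include <vector>

    // step 3: quad-by-quad triangulation of the segmented grid;
    // obj[u * maxV + v] is the object index from step 2 (-1 = background),
    // faces are emitted as point-index lists: 4 indices = quad, 3 = triangle
    void Triangulate(const std::vector<int>& obj, int maxU, int maxV,
                     std::vector<std::vector<int>>& faces)
    {
      faces.clear();
      for (int u = 0; u + 1 < maxU; ++u)
        for (int v = 0; v + 1 < maxV; ++v)
        {
          // point indices of the 4 corners of the current cell, in ring order
          const int p[4] = { u * maxV + v,             (u + 1) * maxV + v,
                             (u + 1) * maxV + (v + 1), u * maxV + (v + 1) };

          // all 4 corners in the same object -> add the quad
          if (obj[p[0]] >= 0 && obj[p[0]] == obj[p[1]] &&
              obj[p[0]] == obj[p[2]] && obj[p[0]] == obj[p[3]])
          {
            faces.push_back({ p[0], p[1], p[2], p[3] });
            continue;
          }
          // exactly 3 corners in the same object -> add the triangle (4 combinations);
          // with only 1 or 2 matching corners no combination passes, so nothing is added
          for (int skip = 0; skip < 4; ++skip)
          {
            const int a = p[(skip + 1) % 4], b = p[(skip + 2) % 4], c = p[(skip + 3) % 4];
            if (obj[a] >= 0 && obj[a] == obj[b] && obj[a] == obj[c])
            {
              faces.push_back({ a, b, c });
              break;
            }
          }
        }
    }

The resulting faces are plain point-index lists (index = u*maxV+v); they can be converted to vtkPolyData cells, or to whatever mesh container the rendering library expects.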
