
3D point cloud segmentation using 2D information

I have the following problem: I have a neural network that detects bounding boxes of objects I'm looking for in RGB images. The network returns the bottom-left and top-right pixel coordinates of the bounding box of a detected object, as well as a segmentation mask. I want to use this information to extract the detected object from a point cloud using the PCL library. I wrote the following code, but out_pointcloud ends up empty.

           pcl::Vertices x_box_vertices;
           x_box_vertices.vertices.push_back(x_bottom_left_vertice);
           x_box_vertices.vertices.push_back(x_top_right_vertice);

           pcl::Vertices y_box_vertices;
           y_box_vertices.vertices.push_back(y_bottom_left_vertice);
           y_box_vertices.vertices.push_back(y_top_right_vertice);

           std::cout << "Bottom left: " << x_bottom_left_vertice << ", " << y_bottom_left_vertice << std::endl;
           std::cout << "Top right: " << x_top_right_vertice << ", " << y_top_right_vertice << std::endl;

           std::vector<pcl::Vertices> box_vertices{x_box_vertices, y_box_vertices};

           pcl::CropHull<pcl::PointXYZRGB> cropFrontHull;
           cropFrontHull.setHullIndices(box_vertices);
           pcl::PointCloud<pcl::PointXYZRGB>::Ptr p_cloud(new pcl::PointCloud<pcl::PointXYZRGB> (cloud));
           cropFrontHull.setHullCloud(p_cloud);
           std::cout << p_cloud->size() << std::endl;

           pcl::PointCloud<pcl::PointXYZRGB> out_pointcloud;
           cropFrontHull.filter(out_pointcloud);
           std::cout << out_pointcloud.size() << std::endl;

The cloud variable is declared as pcl::PointCloud<pcl::PointXYZRGB> cloud and stores the point cloud from which I want to extract the detected object.
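A side note on why out_pointcloud comes back empty: pcl::CropHull expects setHullIndices to contain polygons whose entries are indices into the hull cloud, not pixel coordinates, so the two 2-vertex "polygons" above describe a degenerate hull that rejects every point. If your cloud is organized (width × height matching the image, as most RGB-D drivers produce), you can skip CropHull entirely and index points by pixel. Here is a minimal, self-contained sketch of that idea; the Point struct, OrganizedCloud, and extractBox helper are illustrative stand-ins, not PCL API:

```cpp
#include <cstddef>
#include <vector>

// Minimal stand-in for one XYZ point of an organized cloud.
struct Point { float x, y, z; };

// Organized cloud: row-major width x height grid, one point per pixel.
struct OrganizedCloud {
    std::size_t width, height;
    std::vector<Point> points;                // size == width * height
    const Point& at(std::size_t u, std::size_t v) const {
        return points[v * width + u];         // (column, row) indexing
    }
};

// Collect the points whose pixel falls inside the 2D bounding box
// [u_min, u_max] x [v_min, v_max] returned by the detector.
std::vector<Point> extractBox(const OrganizedCloud& cloud,
                              std::size_t u_min, std::size_t v_min,
                              std::size_t u_max, std::size_t v_max) {
    std::vector<Point> out;
    for (std::size_t v = v_min; v <= v_max && v < cloud.height; ++v)
        for (std::size_t u = u_min; u <= u_max && u < cloud.width; ++u)
            out.push_back(cloud.at(u, v));
    return out;
}
```

With a real pcl::PointCloud<pcl::PointXYZRGB>, cloud.at(u, v) gives exactly this per-pixel access whenever cloud.isOrganized() is true, and the segmentation mask can be applied by additionally testing the mask value for (u, v) inside the inner loop.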

It is not clear from the question which sensor you are using, but what you need is a way to calibrate the depth sensor so that it is in the same frame as the RGB sensor.

Here is an example if you're using a Kinect v2: https://github.com/code-iai/iai_kinect2/tree/master/kinect2_calibration — see the difference between "Uncalibrated depth cloud" and "Calibrated depth cloud" in the repository.

If you achieve this, an XY position in the RGB image will correspond to the same XY position in the depth image, and you will have no trouble extracting the correct points from the cloud.
