
How to get the depth at pixel x,y after post processing with RealSense?

Consider the following code:

// Declare pointcloud object, for calculating pointclouds and texture mappings
rs2::pointcloud pc;
// We want the points object to be persistent so we can display the last cloud when a frame drops
rs2::points points;

// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe;

// Start streaming with default recommended configuration
pipe.start();

// Declare filters
rs2::decimation_filter dec_filter;  // Decimation - reduces depth frame density
rs2::threshold_filter thr_filter;   // Threshold  - removes values outside recommended range
rs2::spatial_filter spat_filter;    // Spatial    - edge-preserving spatial smoothing
rs2::temporal_filter temp_filter;   // Temporal   - reduces temporal noise
rs2::disparity_transform depth_to_disparity(true);
rs2::disparity_transform disparity_to_depth(false);

// Initialize a vector that holds filters and their options
std::vector<rs2::filter*> filters;
    
// The following order of emplacement will dictate the orders in which filters are applied
filters.emplace_back(&dec_filter);
filters.emplace_back(&thr_filter);
filters.emplace_back(&depth_to_disparity);
filters.emplace_back(&spat_filter);
filters.emplace_back(&temp_filter);
filters.emplace_back(&disparity_to_depth);


while (app) // Application still alive?
{
    // Wait for the next set of frames from the camera
    auto frames = pipe.wait_for_frames();

    rs2::video_frame color = frames.get_color_frame();

    // For cameras that don't have RGB sensor, we'll map the pointcloud to infrared instead of color
    if (!color)
        color = frames.get_infrared_frame();

    rs2::depth_frame depth = frames.get_depth_frame();

    int centerX = depth.get_width() / 2;
    int centerY = depth.get_height() / 2;


    // A: Pre-filtered
    float prefiltered_distance = depth.get_distance(centerX, centerY);

    
    // B: Filter frames
    for (auto filter : filters)
    {
        depth = filter->process(depth);
    }
    
    
    // C: Post-filtered (fails)
    float postfiltered_distance = depth.get_distance(centerX, centerY);


    // Tell pointcloud object to map to this color frame
    pc.map_to(color);
    // Generate the pointcloud and texture mappings
    points = pc.calculate(depth);
    
    // ...

}

為什么調用depth.get_distance(centerX, centerY); 過濾框架之前工作正常,但在過濾失敗調用相同的 function out of range value for argument "y"

In short, how do I get the filtered distance (z) at pixel x,y?

The decimation filter reduces the image resolution, so after running the filters you should query the resolution again and update your centerX and centerY variables so they are no longer out of range.
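For example, here is a minimal sketch of that fix, replacing step C in the loop above (the names filteredCenterX and filteredCenterY are illustrative):

// After filtering, the decimation filter may have shrunk the frame
// (e.g. with its default magnitude of 2, a 640x480 depth frame
// becomes roughly 320x240), so query the resolution from the
// filtered frame itself rather than reusing the original values.
int filteredCenterX = depth.get_width() / 2;
int filteredCenterY = depth.get_height() / 2;

// These coordinates are now guaranteed to be in range for the filtered frame
float postfiltered_distance = depth.get_distance(filteredCenterX, filteredCenterY);

If you need the filtered depth at a specific pixel of the original image rather than at the center, dividing the original x and y by the decimation magnitude gives (roughly) the corresponding pixel in the decimated frame.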

