
How do I convert from a CVPixelBufferRef to an openCV cv::Mat

I want to perform a few operations on a CVPixelBufferRef and end up with a cv::Mat:

  • Crop to a region of interest
  • Scale to a fixed dimension
  • Equalize the histogram
  • Convert to grayscale, 8 bits per pixel (CV_8UC1)

I'm not sure what the most efficient order is for these operations; however, I do know that all of them are available on an OpenCV matrix, so I would like to know how to do the conversion.

- (void) captureOutput:(AVCaptureOutput *)captureOutput 
         didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
         fromConnection:(AVCaptureConnection *)connection
{
     CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

     cv::Mat frame = f(pixelBuffer); // how do I implement f()?
}

I found the answer in some excellent GitHub source code. I have adapted it here for simplicity. It also does the grayscale conversion for me.

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);

// Set the following dict on AVCaptureVideoDataOutput's videoSettings to get YUV output
// @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) }

NSAssert(format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, @"Only YUV is supported");

// The first plane / channel (at index 0) is the grayscale (luma) plane.
// See more information about the YUV format:
// http://en.wikipedia.org/wiki/YUV
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

size_t width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
size_t height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

// Wrap the luma plane without copying; pass the row stride explicitly
// because the plane's rows may be padded.
cv::Mat mat((int)height, (int)width, CV_8UC1, baseaddress, bytesPerRow);

// Use the mat here

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
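Note that this Mat wraps the pixel buffer's memory without copying, so it is only valid between the lock and unlock calls; clone() it if the data needs to outlive that scope. Passing the plane's bytes-per-row as the stride matters because Core Video may pad each row.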

I believe the best order is (a short sketch follows the list):

  1. Convert to grayscale (since it is almost done automatically)
  2. Crop (this should be a fast operation, and it reduces the number of pixels to work with)
  3. Scale down
  4. Equalize the histogram
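For illustration, here is a minimal sketch of that pipeline, assuming `gray` is the CV_8UC1 Mat wrapped around the luma plane in the answer above; the ROI and target size are placeholder values you would choose yourself.

#include <opencv2/imgproc/imgproc.hpp>

// `gray` is assumed to be the CV_8UC1 Mat from the snippet above.
cv::Rect roi(0, 0, 200, 200);                    // hypothetical region of interest
cv::Mat cropped = gray(roi);                     // 2. crop: a view, no pixels copied
cv::Mat resized;
cv::resize(cropped, resized, cv::Size(64, 64));  // 3. scale down to a fixed size
cv::Mat equalized;
cv::equalizeHist(resized, equalized);            // 4. equalize the histogram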

I am using this. My cv::Mat is configured with the BGR (CV_8UC3) color format.

CVImageBufferRef -> cv::Mat

- (cv::Mat) matFromImageBuffer: (CVImageBufferRef) buffer {

    cv::Mat mat;

    CVPixelBufferLockBaseAddress(buffer, 0);

    void *address = CVPixelBufferGetBaseAddress(buffer);
    int width = (int) CVPixelBufferGetWidth(buffer);
    int height = (int) CVPixelBufferGetHeight(buffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);

    // Wrap the BGRA data with its real row stride, then clone so the
    // returned Mat still owns its pixels after the buffer is unlocked.
    mat = cv::Mat(height, width, CV_8UC4, address, bytesPerRow).clone();
    //cv::cvtColor(mat, mat, CV_BGRA2BGR); // drop alpha if you need 3-channel BGR

    CVPixelBufferUnlockBaseAddress(buffer, 0);

    return mat;
}
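The clone() above is deliberate: the wrapped Mat points into the pixel buffer's memory, which is no longer guaranteed valid once the base address is unlocked, so the returned Mat takes its own copy. If you only use the pixels inside the lock/unlock scope, you can drop the clone and avoid the copy.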

cv::Mat -> CVImageBufferRef (CVPixelBufferRef)

- (CVImageBufferRef) getImageBufferFromMat: (cv::Mat) mat {

    cv::cvtColor(mat, mat, CV_BGR2BGRA);

    int width = mat.cols;
    int height = mat.rows;

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             // [NSNumber numberWithBool:YES], (__bridge id)kCVPixelBufferCGImageCompatibilityKey,
                             // [NSNumber numberWithBool:YES], (__bridge id)kCVPixelBufferCGBitmapContextCompatibilityKey,
                             [NSNumber numberWithInt:width], (__bridge id)kCVPixelBufferWidthKey,
                             [NSNumber numberWithInt:height], (__bridge id)kCVPixelBufferHeightKey,
                             nil];

    CVPixelBufferRef imageBuffer;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_32BGRA,
                                          (__bridge CFDictionaryRef) options,
                                          &imageBuffer);


    NSParameterAssert(status == kCVReturnSuccess && imageBuffer != NULL);

    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *base = (uint8_t *) CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Copy row by row: the destination rows may be padded beyond width * 4 bytes.
    for (int row = 0; row < height; row++) {
        memcpy(base + row * bytesPerRow, mat.ptr(row), width * 4);
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // The caller owns the returned buffer and must CVPixelBufferRelease() it.
    return imageBuffer;
}
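For completeness, a hedged usage sketch of the round trip (the variable names are illustrative): since getImageBufferFromMat: creates the buffer with a +1 retain count, the caller must release it when done.

cv::Mat frame = [self matFromImageBuffer:pixelBuffer];
// ... crop / resize / equalize `frame` here ...
CVImageBufferRef output = [self getImageBufferFromMat:frame];
// ... hand `output` to whatever consumes it ...
CVPixelBufferRelease(output); // balance the CVPixelBufferCreate above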
