
How to convert opencv cv::Mat to CVPixelBuffer

I'm an undergraduate student building a HumanSeg iPhone app with CoreML. Since my model needs resizing and black padding applied to the original video frames, I can't rely on Vision (which only provides resizing, not black padding) and have to do the conversion myself.

I have CVPixelBuffer frames, and I have converted them into cv::Mat with the following code:

// Lock the buffer so its base address stays valid while we read it
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
int bufferWidth = (int) CVPixelBufferGetWidth(pixelBuffer);
int bufferHeight = (int) CVPixelBufferGetHeight(pixelBuffer);
int bytePerRow = (int) CVPixelBufferGetBytesPerRow(pixelBuffer);
unsigned char *pixel = (unsigned char *) CVPixelBufferGetBaseAddress(pixelBuffer);
// Wrap the BGRA pixels in a cv::Mat header (no copy; the Mat points at the buffer's memory)
Mat image = Mat(bufferHeight, bufferWidth, CV_8UC4, pixel, bytePerRow);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
/* I'll do my resizing and padding here */

// How can I implement this function?
convertToCVPixelBuffer(image);

But now, after my preprocessing is done, I have to convert the cv::Mat back to a CVPixelBuffer to feed it to the CoreML model. How can I achieve this? (Or can Vision achieve black padding with some special technique?)
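For clarity, the resizing and black padding itself isn't the problem: it's plain letterboxing, which OpenCV can do with cv::resize plus cv::copyMakeBorder. A minimal sketch of what I mean (the 512×512 target size is just a placeholder for my model's input):

#include <algorithm>
#include <opencv2/imgproc.hpp>

// Letterbox `image` into a square target x target canvas: scale so the longer
// side fits, then pad the remainder with black (opaque alpha for BGRA input).
cv::Mat letterbox(const cv::Mat &image, int target /* e.g. 512, placeholder */) {
    float scale = std::min((float) target / image.cols, (float) target / image.rows);
    int newW = (int) (image.cols * scale);
    int newH = (int) (image.rows * scale);

    cv::Mat resized;
    cv::resize(image, resized, cv::Size(newW, newH));

    // Split the leftover space evenly between the two sides and fill with black
    int padX = target - newW, padY = target - newH;
    cv::Mat padded;
    cv::copyMakeBorder(resized, padded,
                       padY / 2, padY - padY / 2,
                       padX / 2, padX - padX / 2,
                       cv::BORDER_CONSTANT, cv::Scalar(0, 0, 0, 255));
    return padded;
}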

Any help will be appreciated.

First, convert the Mat to a UIImage (or any other class from the iOS APIs); check this question. Then convert the resulting image to a CVPixelBuffer like this.
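In code, that route looks roughly like the sketch below. It assumes the MatToUIImage helper from OpenCV's iOS headers (<opencv2/imgcodecs/ios.h>) and a CGContext-based UIImage-to-CVPixelBuffer conversion along the lines of the linked answers, so treat it as an outline rather than a drop-in implementation; the caller owns the returned buffer and must CVPixelBufferRelease it.

#import <UIKit/UIKit.h>
#import <CoreVideo/CoreVideo.h>
#import <opencv2/imgcodecs/ios.h>   // MatToUIImage / UIImageToMat

// Sketch: cv::Mat -> UIImage -> CVPixelBuffer (caller releases the result)
static CVPixelBufferRef pixelBufferFromMat(const cv::Mat &mat) {
    UIImage *image = MatToUIImage(mat);
    size_t width  = CGImageGetWidth(image.CGImage);
    size_t height = CGImageGetHeight(image.CGImage);

    NSDictionary *attrs = @{ (id) kCVPixelBufferCGImageCompatibilityKey : @YES,
                             (id) kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
    CVPixelBufferRef buffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_32BGRA,
                                          (__bridge CFDictionaryRef) attrs, &buffer);
    if (status != kCVReturnSuccess) {
        return NULL;
    }

    // Draw the UIImage into the pixel buffer's memory via a BGRA bitmap context
    CVPixelBufferLockBaseAddress(buffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer),
                                                 width, height, 8,
                                                 CVPixelBufferGetBytesPerRow(buffer),
                                                 colorSpace,
                                                 (CGBitmapInfo) kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return buffer;
}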

Please see the code below. Checking whether the width and height are divisible by 64 is necessary, otherwise we get weird results due to a bytes-per-row mismatch between cv::Mat and CVPixelBuffer.

CVPixelBufferRef getImageBufferFromMat(cv::Mat matimg) {
    cv::cvtColor(matimg, matimg, cv::COLOR_BGR2BGRA);
    
    /* Very much required, see https://stackoverflow.com/questions/66434552/objective-c-cvmat-to-cvpixelbuffer
       Width & height have to be multiples of 64 so that the Mat's stride matches
       the CVPixelBuffer's bytes-per-row (and for better caching).
     */
    int widthReminder = matimg.cols % 64, heightReminder = matimg.rows % 64;
    if (widthReminder != 0 || heightReminder != 0) {
        cv::resize(matimg, matimg, cv::Size(matimg.cols + (64 - widthReminder), matimg.rows + (64 - heightReminder)));
    }
    
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool: YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool: YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             [NSNumber numberWithInt: matimg.cols], kCVPixelBufferWidthKey,
                             [NSNumber numberWithInt: matimg.rows], kCVPixelBufferHeightKey,
                             [NSNumber numberWithInt: (int) matimg.step[0]], kCVPixelBufferBytesPerRowAlignmentKey,
                             nil];
    CVPixelBufferRef imageBuffer;
    // __bridge keeps ownership with ARC; CFBridgingRetain here would leak the options dictionary
    CVReturn status = CVPixelBufferCreate(kCFAllocatorMalloc, matimg.cols, matimg.rows, kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef) options, &imageBuffer);
    NSParameterAssert(status == kCVReturnSuccess && imageBuffer != NULL);
    
    // A single memcpy is enough here because the 64-multiple size keeps the strides in sync
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *base = CVPixelBufferGetBaseAddress(imageBuffer);
    memcpy(base, matimg.data, matimg.total() * matimg.elemSize());
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    
    return imageBuffer;
}
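
Two notes on using this: CVPixelBufferCreate hands back a +1 reference, so the caller must CVPixelBufferRelease the buffer once the CoreML prediction is done. And if resizing to a multiple of 64 is not an option for your model, a row-by-row copy that honours the destination's bytes-per-row avoids the stride mismatch without touching the image size; a minimal sketch of that variant of the copy step (same variable names as above):

    // Copy row by row so the cv::Mat stride (matimg.step[0]) and the
    // CVPixelBuffer stride (bytes per row) are allowed to differ
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *dst = (uint8_t *) CVPixelBufferGetBaseAddress(imageBuffer);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t rowBytes = matimg.cols * matimg.elemSize();   // valid bytes per row in the Mat
    for (int row = 0; row < matimg.rows; ++row) {
        memcpy(dst + row * dstBytesPerRow, matimg.ptr(row), rowBytes);
    }
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);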
