
Live Text Recognition (OCR)

I was wondering whether it is possible to run OCR on the iPhone's live camera feed without snapping a photo. The alphanumeric text follows a predictable, sometimes fixed, pattern (something like a serial number).

I've tried OpenCV and Tesseract, but I couldn't figure out how to run image processing on the live camera feed.

I just don't know how to handle the part where I recognise the specific text I am expecting. Are there any other libraries I can use for this?

You can achieve that with TesseractOCR and an AVCaptureSession: attach an AVCaptureVideoDataOutput to the session, convert each delivered sample buffer to a UIImage, and hand it to Tesseract. The timer together with the canScanFrame / isScanning flags throttles the work so that only one frame is being recognized at a time.

#import <AVFoundation/AVFoundation.h>
#import <TesseractOCR/TesseractOCR.h> // TesseractOCRiOS pod (provides G8Tesseract)

@interface YourClass () <AVCaptureVideoDataOutputSampleBufferDelegate>
{
    BOOL canScanFrame;
    BOOL isScanning;
}
@property (strong, nonatomic) NSTimer *timer;
@property (strong, nonatomic) G8Tesseract *scanner;

@end

@implementation YourClass
//...
- (void)prepareToScan
{
    //Prepare capture session, preview layer and so on (see the setupCaptureSession sketch below)
    //...

    self.timer = [NSTimer scheduledTimerWithTimeInterval:0.5 target:self selector:@selector(timerTicked) userInfo:nil repeats:YES];
}
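
// A rough sketch of the "Prepare capture session, preview layer and so on" part above.
// Everything here (the setupCaptureSession name, the "eng" language, self.view, the queue
// label) is an assumption for illustration, not part of the original answer. The important
// detail is kCVPixelFormatType_32BGRA, which is what imageFromSampleBuffer: below expects.
- (void)setupCaptureSession
{
    self.scanner = [[G8Tesseract alloc] initWithLanguage:@"eng"];

    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input && [session canAddInput:input]) {
        [session addInput:input];
    }

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    output.alwaysDiscardsLateVideoFrames = YES;
    [output setSampleBufferDelegate:self queue:dispatch_queue_create("ocr.video.queue", DISPATCH_QUEUE_SERIAL)];
    if ([session canAddOutput:output]) {
        [session addOutput:output];
    }

    // The preview layer keeps a strong reference to the session; in a real app you would
    // normally store the session in a property as well.
    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    previewLayer.frame = self.view.bounds;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:previewLayer];

    [session startRunning];
}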

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (canScanFrame) {
        canScanFrame = NO;

        CGImageRef imageRef = [self imageFromSampleBuffer:sampleBuffer];
        UIImage *image = [UIImage imageWithCGImage:imageRef scale:1 orientation:UIImageOrientationRight];
        CGImageRelease(imageRef);

        [self.scanner setImage:image];

        isScanning = YES;
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            NSLog(@"scan start");
            [self.scanner recognize];
            NSLog(@"scan stop");
            dispatch_async(dispatch_get_main_queue(), ^{
                isScanning = NO;
                NSString *text = [self.scanner recognizedText];
                //do something with the recognized text, e.g. match it against the expected pattern (see the sketch after the class)
            });
        });
    }
}

// Create a CGImageRef from the sample buffer data. Assumes the video output delivers
// kCVPixelFormatType_32BGRA frames; the caller is responsible for releasing the returned image.
- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer,0);        // Lock the image buffer

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);         // Get the pixel data (BGRA is non-planar)
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);

    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    return newImage;
}
- (void)timerTicked
{
    if (!isScanning) {
        canScanFrame = YES;
    }
}

@end
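
The question also mentions that the text follows a predictable pattern (something like a serial number). A simple way to handle the "recognise the text I am expecting" part is to run the recognized string through a regular expression and only accept it when it matches. The helper below is just a sketch under an assumed format of three uppercase letters, a dash and five digits; adjust the pattern to the real format and add it to the implementation above.

// Hypothetical helper, not from the original answer. Returns the first substring of the
// OCR output that matches the expected serial-number pattern, or nil if none does.
- (NSString *)expectedTextIn:(NSString *)recognizedText
{
    if (recognizedText.length == 0) {
        return nil;
    }
    NSRegularExpression *regex =
        [NSRegularExpression regularExpressionWithPattern:@"[A-Z]{3}-\\d{5}"   // assumed format, e.g. ABC-12345
                                                  options:0
                                                    error:NULL];
    NSTextCheckingResult *match = [regex firstMatchInString:recognizedText
                                                    options:0
                                                      range:NSMakeRange(0, recognizedText.length)];
    return match ? [recognizedText substringWithRange:match.range] : nil;
}

Call it at the //do something with text spot; when it returns a non-nil value you have a candidate, and you can invalidate the timer, or keep collecting hits until the same value shows up in a few consecutive frames to smooth out OCR noise.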
