
SwiftUI - scanning QR codes in the macOS app

I am working on a Mac application that reads a QR code from the camera and extracts the text encoded in it.

Unfortunately, the CodeScanner package is iOS-only, and on macOS there is no delegate method:

func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection)

Of course, using that method requires conforming to AVCaptureMetadataOutputObjectsDelegate, and unfortunately this protocol is not available on macOS.

Is there any way to create a macOS QR code scanner using SwiftUI?

I currently have the MacBook camera preview working, but the piece that catches QR codes is missing:

struct PlayerContainerView: NSViewRepresentable {
    typealias NSViewType = PlayerView
    
    let captureSession: AVCaptureSession
    
    init(captureSession: AVCaptureSession) {
        self.captureSession = captureSession
    }
    
    func makeNSView(context: Context) -> PlayerView {
        return PlayerView(captureSession: captureSession)
    }
    
    func updateNSView(_ nsView: PlayerView, context: Context) {}
}

class PlayerView: NSView {
    var previewLayer: AVCaptureVideoPreviewLayer?
    
    init(captureSession: AVCaptureSession) {
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        super.init(frame: .zero)
        
        setupLayer()
    }
    
    func setupLayer() {
        previewLayer?.frame = self.frame
        previewLayer?.contentsGravity = .resizeAspectFill
        previewLayer?.videoGravity = .resizeAspectFill

        // Mirror the preview (as the FaceTime app does) when the connection supports it.
        if let connection = previewLayer?.connection, connection.isVideoMirroringSupported {
            connection.automaticallyAdjustsVideoMirroring = false
            connection.isVideoMirrored = true
        }

        // The view must be layer-hosting for the preview layer to be displayed.
        wantsLayer = true
        layer = previewLayer
    }
    
    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
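For completeness, here is a minimal sketch of the capture-session setup this preview assumes; the function and variable names are mine, not from the original code:

```swift
import AVFoundation

// Sketch: building the AVCaptureSession that PlayerContainerView is given.
// All names here are illustrative.
func makeCaptureSession() -> AVCaptureSession {
    let session = AVCaptureSession()
    if let camera = AVCaptureDevice.default(for: .video),
       let input = try? AVCaptureDeviceInput(device: camera),
       session.canAddInput(input) {
        session.addInput(input)
    }
    session.startRunning()
    return session
}
```

Note that on recent macOS versions the app also needs the `NSCameraUsageDescription` key in its Info.plist and camera entitlements for sandboxed builds.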

I couldn't find any delegate protocol that works like AVCaptureMetadataOutputObjectsDelegate and would let me catch the metadata objects.

I too was surprised that AVFoundation on macOS didn't support AVCaptureMetadataOutput, but Core Image can detect QR codes, and this worked for me:

let qrDetector = CIDetector(ofType: CIDetectorTypeQRCode, context: nil, options: nil)!

// AVCaptureVideoDataOutputSampleBufferDelegate method; called for every captured frame.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    let ciImage = CIImage(cvImageBuffer: imageBuffer)
    let features = qrDetector.features(in: ciImage)

    for case let qrCodeFeature as CIQRCodeFeature in features {
        print("messageString \(qrCodeFeature.messageString ?? "")")
    }
}
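For this delegate method to be called at all, the session also needs an AVCaptureVideoDataOutput whose sample-buffer delegate is set. A sketch, assuming the class and queue names (they are mine, not from the answer):

```swift
import AVFoundation

// Sketch: attaching a video data output so captureOutput(_:didOutput:from:)
// is actually invoked for each frame. Names are illustrative.
final class FrameReceiver: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func attach(to session: AVCaptureSession) {
        let videoOutput = AVCaptureVideoDataOutput()
        // Deliver frames on a background queue so the UI stays responsive.
        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "qr.scan.queue"))
        if session.canAddOutput(videoOutput) {
            session.addOutput(videoOutput)
        }
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Per-frame handling (e.g. the CIDetector QR detection above) goes here.
    }
}
```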

There's probably a more efficient way to do this.
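One such alternative is the Vision framework (available on macOS 10.13+): VNDetectBarcodesRequest can be run against each sample buffer and restricted to QR codes. A sketch, with the function name being my own:

```swift
import Vision

// Sketch: Vision-based QR detection on a frame's pixel buffer.
// On older SDKs the symbology case is spelled .QR rather than .qr.
func detectQRCodes(in imageBuffer: CVImageBuffer) {
    let request = VNDetectBarcodesRequest { request, error in
        guard let results = request.results as? [VNBarcodeObservation] else { return }
        for observation in results {
            print("payload: \(observation.payloadStringValue ?? "")")
        }
    }
    request.symbologies = [.qr]   // skip the other barcode types Vision supports

    let handler = VNImageRequestHandler(cvPixelBuffer: imageBuffer, options: [:])
    try? handler.perform([request])
}
```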
