
How to overcome slowness of live camera view in iOS

I am trying to develop an image segmentation app that processes the live camera view with my Core ML model, but the output is somewhat slow: the camera view with the mask prediction lags behind. Below are my vision manager class, which runs the prediction on the pixel buffer, and the function that calls this class to convert the result to colors before proceeding to the camera output. Has anyone faced this issue before? Do you see an error in my code that causes the slowness?

Vision manager class:

class VisionManager: NSObject {
    static let shared = VisionManager()
    static let MODEL = ba_224_segm().model

    private lazy var predictionRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: VisionManager.MODEL)
            let request = VNCoreMLRequest(model: model)
            request.imageCropAndScaleOption = .centerCrop
            return request
        } catch {
            fatalError("can't load Vision ML model")
        }
    }()

    func predict(pixelBuffer: CVImageBuffer, sampleBuffer: CMSampleBuffer, onResult: ((_ observations: [VNCoreMLFeatureValueObservation]) -> Void)) {
        var requestOptions: [VNImageOption: Any] = [:]
        if let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil) {
            requestOptions = [.cameraIntrinsics: cameraIntrinsicData]
        }

        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: requestOptions)
        do {
            try handler.perform([predictionRequest])
        } catch {
            print("error performing vision request: \(error)")
        }

        guard let observations = predictionRequest.results as? [VNCoreMLFeatureValueObservation] else {
            fatalError("unexpected result type from VNCoreMLRequest")
        }
        onResult(observations)
    }
}

The camera output handler that runs the prediction:

func handleCameraOutput(pixelBuffer: CVImageBuffer, sampleBuffer: CMSampleBuffer, onFinish: @escaping ((_ image: UIImage?) -> Void)) {
    VisionManager.shared.predict(pixelBuffer: pixelBuffer, sampleBuffer: sampleBuffer) { [weak self] (observations) in

        if let multiArray: MLMultiArray = observations[0].featureValue.multiArrayValue {

            mask = maskEdit.maskToRGBA(maskArray: MultiArray<Float32>(multiArray), rgba: (Float(r), Float(g), Float(b), Float(a)))!
            maskInverted = maskEdit.maskToRGBAInvert(maskArray: MultiArray<Float32>(multiArray), rgba: (r: 1.0, g: 1.0, b: 1.0, a: 0.4))!

            let image = maskEdit.mergeMaskAndBackground(invertedMask: maskInverted, mask: mask, background: pixelBuffer, size: Int(size))

            DispatchQueue.main.async {
                onFinish(image)
            }
        }
    }
}

I call these under viewDidAppear as follows:

CameraManager.shared.setDidOutputHandler { [weak self] (output, pixelBuffer, sampleBuffer, connection) in
    guard let self = self else { return }

    self.maskColor.getRed(&self.r, green: &self.g, blue: &self.b, alpha: &self.a)
    self.a = 0.5
    self.handleCameraOutput(pixelBuffer: pixelBuffer, sampleBuffer: sampleBuffer, onFinish: { (image) in
        self.predictionView.image = image
    })
}

Your model takes time to perform the segmentation, and converting the output into an image takes time as well. There is not much you can do about this latency other than making the model smaller and making sure the output-to-image conversion code is as fast as possible.
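To see which of those two stages dominates, it can help to time them separately. A minimal sketch, assuming the `predictionRequest` and pixel buffer from the question (the conversion step here is only a placeholder comment):

```swift
import Vision

// Hypothetical timing sketch: measure inference and conversion separately.
let t0 = CFAbsoluteTimeGetCurrent()
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
try? handler.perform([predictionRequest])
let t1 = CFAbsoluteTimeGetCurrent()
// ... convert the observation to a UIImage here, as in handleCameraOutput ...
let t2 = CFAbsoluteTimeGetCurrent()
print("inference: \(t1 - t0) s, conversion: \(t2 - t1) s")
```

Whichever number is larger is the one worth optimizing first: a smaller model for the inference time, or faster mask-to-image code for the conversion time.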

It turned out the issue was that I was not using a separate thread. Since I am a new developer I did not know such details, and I am still learning thanks to the experts in this field and the knowledge they share. Please see my old and new capture output functions below; using a different thread solved my problem:

Old version:

public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        else { return }

    self.handler?(output, pixelBuffer, sampleBuffer, connection)

    self.onCapture?(pixelBuffer, sampleBuffer)
    self.onCapture = nil
}

And the new version:

public func captureOutput(_ output: AVCaptureOutput,
                          didOutput sampleBuffer: CMSampleBuffer,
                          from connection: AVCaptureConnection) {
    // Only start a new prediction once the previous one has finished;
    // frames that arrive in the meantime are simply skipped.
    if currentBuffer == nil {
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        currentBuffer = pixelBuffer

        // Run the prediction off the capture thread so the camera keeps running.
        DispatchQueue.global(qos: .userInitiated).async {
            self.handler?(output, self.currentBuffer!, sampleBuffer, connection)
            self.currentBuffer = nil
        }
    }
}
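In the same spirit, the capture output itself can be configured so that AVFoundation discards frames that arrive while the delegate is still busy, rather than queuing them behind a slow prediction. A sketch of that setup, assuming a `session` and `videoQueue` like those a typical CameraManager would own (these names are not from the original code):

```swift
import AVFoundation

// Sketch: let AVFoundation drop late frames instead of buffering them,
// so stale frames never pile up behind the Core ML prediction.
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.alwaysDiscardsLateVideoFrames = true
videoOutput.setSampleBufferDelegate(self, queue: videoQueue)
if session.canAddOutput(videoOutput) {
    session.addOutput(videoOutput)
}
```

The `currentBuffer == nil` check above achieves a similar frame-dropping effect by hand; the two approaches complement each other.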
