
How to overcome slowness of live camera view in iOS

I am trying to develop an image segmentation app that processes the live camera view through my Core ML model. However, the output is slow: the camera view with the masked prediction lags. Below are my vision manager class, which runs the prediction on the pixel buffer, and the function that calls this class to convert the result to colors before it reaches the camera output. Has anyone faced this issue before? Do you see an error in my code that causes the slowness?

Vision Manager class:

class VisionManager: NSObject {
    static let shared = VisionManager()
    static let MODEL = ba_224_segm().model

    private lazy var predictionRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: VisionManager.MODEL)
            let request = VNCoreMLRequest(model: model)
            request.imageCropAndScaleOption = .centerCrop
            return request
        } catch {
            fatalError("can't load Vision ML model")
        }
    }()

    func predict(pixelBuffer: CVImageBuffer, sampleBuffer: CMSampleBuffer, onResult: ((_ observations: [VNCoreMLFeatureValueObservation]) -> Void)) {
        // Pass the camera intrinsics through to Vision when they are available.
        var requestOptions: [VNImageOption: Any] = [:]
        if let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil) {
            requestOptions = [.cameraIntrinsics: cameraIntrinsicData]
        }

        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: requestOptions)
        do {
            // perform(_:) runs synchronously on the calling thread.
            try handler.perform([predictionRequest])
        } catch {
            print("prediction failed: \(error)")
        }

        guard let observations = predictionRequest.results as? [VNCoreMLFeatureValueObservation] else {
            fatalError("unexpected result type from VNCoreMLRequest")
        }
        onResult(observations)
    }
}

Predicted camera output function:

func handleCameraOutput(pixelBuffer: CVImageBuffer, sampleBuffer: CMSampleBuffer, onFinish: @escaping ((_ image: UIImage?) -> Void)) {
    VisionManager.shared.predict(pixelBuffer: pixelBuffer, sampleBuffer: sampleBuffer) { [weak self] (observations) in
        guard let self = self else { return }

        if let multiArray: MLMultiArray = observations[0].featureValue.multiArrayValue {
            // Build the colored mask and its inverse from the model output.
            mask = maskEdit.maskToRGBA(maskArray: MultiArray<Float32>(multiArray), rgba: (Float(r), Float(g), Float(b), Float(a)))!
            maskInverted = maskEdit.maskToRGBAInvert(maskArray: MultiArray<Float32>(multiArray), rgba: (r: 1.0, g: 1.0, b: 1.0, a: 0.4))!

            // Composite the mask over the camera frame.
            let image = maskEdit.mergeMaskAndBackground(invertedMask: maskInverted, mask: mask, background: pixelBuffer, size: Int(size))

            // Hand the result back on the main thread for the UI update.
            DispatchQueue.main.async {
                onFinish(image)
            }
        }
    }
}

I call these from viewDidAppear as below:

CameraManager.shared.setDidOutputHandler { [weak self] (output, pixelBuffer, sampleBuffer, connection) in
    guard let self = self else { return }

    self.maskColor.getRed(&self.r, green: &self.g, blue: &self.b, alpha: &self.a)
    self.a = 0.5
    self.handleCameraOutput(pixelBuffer: pixelBuffer, sampleBuffer: sampleBuffer, onFinish: { (image) in
        self.predictionView.image = image
    })
}

It takes time for your model to perform the segmentation, and then it takes time to convert the output into an image. There is not much you can do to shorten this delay, other than making the model smaller and making sure the output-to-image conversion code is as fast as possible.
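For the conversion step, reading the MLMultiArray through its raw data pointer rather than the subscript API is usually much faster. Below is a minimal sketch of such a conversion, assuming the model outputs a Float32 array of shape [height, width] with values in 0...1; the function name and the white mask color are illustrative assumptions, not part of the code above:

import CoreML
import UIKit

// A minimal sketch of a fast mask-to-image conversion, assuming a Float32
// MLMultiArray of shape [height, width] with values in 0...1. The function
// name and the white mask color are illustrative assumptions.
func maskToImage(_ multiArray: MLMultiArray, width: Int, height: Int) -> UIImage? {
    // Read the array through its raw pointer instead of the slower subscript API.
    let ptr = multiArray.dataPointer.bindMemory(to: Float32.self, capacity: width * height)

    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    for i in 0..<(width * height) {
        let alpha = UInt8(max(0, min(255, ptr[i] * 255)))
        // Premultiplied white: each color channel equals the alpha value.
        pixels[i * 4]     = alpha // R
        pixels[i * 4 + 1] = alpha // G
        pixels[i * 4 + 2] = alpha // B
        pixels[i * 4 + 3] = alpha // A: mask confidence
    }

    var image: UIImage?
    pixels.withUnsafeMutableBytes { buffer in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue),
              let cgImage = context.makeImage() else { return }
        image = UIImage(cgImage: cgImage)
    }
    return image
}

If a per-pixel loop like this is still too slow, the Accelerate/vImage framework can do the same conversion with SIMD.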

I found out my issue: I was not using a different thread. Since I am a new developer I did not know such details, and I am still learning thanks to the experts in the field and their shared knowledge. Please see my old and new captureOutput functions below. Using a different thread solved my problem:

Old version:

public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    // Everything runs on the capture callback thread, blocking it
    // until the prediction and image conversion finish.
    self.handler?(output, pixelBuffer, sampleBuffer, connection)

    self.onCapture?(pixelBuffer, sampleBuffer)
    self.onCapture = nil
}

New version:

    public func captureOutput(_ output: AVCaptureOutput,
didOutput sampleBuffer: CMSampleBuffer,
                          from connection: AVCaptureConnection) {
    if currentBuffer == nil{
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    
currentBuffer = pixelBuffer
DispatchQueue.global(qos: .userInitiated).async {

    self.handler?(output, self.currentBuffer!, sampleBuffer, connection)
    
self.currentBuffer = nil

        }
        
}

}
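As a complement to the frame-dropping guard above, the capture output itself can be configured to deliver frames on a dedicated background queue and to discard frames that arrive late. CameraManager's internals are not shown here, so the sketch below is only an assumption about how such a setup might look:

import AVFoundation

// A sketch of the capture-side setup, assuming a session configured elsewhere;
// the function and parameter names are hypothetical.
func configureVideoOutput(for session: AVCaptureSession,
                          delegate: AVCaptureVideoDataOutputSampleBufferDelegate) {
    let videoOutput = AVCaptureVideoDataOutput()
    // Drop frames that arrive while a previous frame is still being processed,
    // instead of queueing them up and falling further behind.
    videoOutput.alwaysDiscardsLateVideoFrames = true
    // Deliver frames on a dedicated background queue, keeping the main thread free.
    let videoQueue = DispatchQueue(label: "camera.video.queue", qos: .userInitiated)
    videoOutput.setSampleBufferDelegate(delegate, queue: videoQueue)
    if session.canAddOutput(videoOutput) {
        session.addOutput(videoOutput)
    }
}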
