
How to significantly reduce Energy Impact in iOS app?

I'm developing an ARKit app that uses the Vision framework (running a Core ML model).

The loopCoreMLUpdate() function creates a loop that leads to a Very High Energy Impact (CPU = 70%, GPU = 66%).

How can I handle this task and reduce the Energy Impact to a Low level?

What workaround for this loop issue will help me decrease the CPU/GPU workload?

Here's my code:

import UIKit
import SpriteKit
import ARKit
import Vision

class ViewController: UIViewController, ARSKViewDelegate {

    @IBOutlet weak var sceneView: ARSKView!
    let dispatchQueueML = DispatchQueue(label: "AI")
    var visionRequests = [VNRequest]()

    // .........................................
    // .........................................

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        let configuration = AROrientationTrackingConfiguration()
        sceneView.session.run(configuration)

        loopCoreMLUpdate()
    }

    func loopCoreMLUpdate() {
        dispatchQueueML.async {
            self.loopCoreMLUpdate()  // SELF-LOOP LEADS TO A VERY HIGH IMPACT
            self.updateCoreML()
        }
    }

    func updateCoreML() {
        // Grab the current camera frame, if one is available.
        guard let pixelBuffer = sceneView.session.currentFrame?.capturedImage else { return }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let imageRequestHandler = VNImageRequestHandler(ciImage: ciImage, options: [:])

        do {
            try imageRequestHandler.perform(self.visionRequests)
        } catch {
            print(error)
        }
    }
    // .........................................
    // .........................................
}

Yes, the line you've marked would definitely be a huge problem. You're not looping here; you're spawning new async tasks as fast as you can, before the previous one even completes. In any case, you're trying to capture CVPixelBuffers faster than they're created, which is a huge waste.

If you want to capture frames, you don't create a tight loop to sample them. You set yourself as the ARSessionDelegate and implement session(_:didUpdate:). The system will tell you when a new frame is available. (It is possible to create your own rendering loop, but you're not doing that here, and you shouldn't unless you really need your own rendering pipeline.)
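A minimal sketch of that delegate-based approach, reusing the question's outlet and request array (everything else is illustrative, not the asker's exact code):

```swift
import UIKit
import ARKit
import Vision

class ViewController: UIViewController, ARSessionDelegate {

    @IBOutlet weak var sceneView: ARSKView!
    var visionRequests = [VNRequest]()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        sceneView.session.delegate = self   // receive per-frame callbacks
        sceneView.session.run(AROrientationTrackingConfiguration())
        // Note: no manual loopCoreMLUpdate() — ARKit drives the updates.
    }

    // Called by ARKit once for each captured frame.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            options: [:])
        try? handler.perform(visionRequests)
    }
}
```

Performing the Vision request synchronously inside session(_:didUpdate:) will block ARKit's delegate queue, so in practice you'd dispatch the work to a background queue (and drop frames while busy, as below).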

Keep in mind that you will receive a lot of frames very quickly. 30 fps or 60 fps is very common, but it can be as high as 120 fps. You cannot use all of that time slice (other things need processor time, too). The point is that you often will not be able to keep up with the frame rate and will need to buffer frames for later processing, drop frames, or both. This is a very normal part of real-time processing.

For this kind of classifying system, you probably want to choose your actual frame rate, maybe as low as 10-20 fps, and skip frames in order to maintain that rate. Classifying dozens of nearly identical frames is unlikely to be helpful.
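One way to enforce such a rate is to throttle by frame timestamp and skip frames while a Vision request is still in flight. A sketch, assuming a ~10 fps target (the type name, queue label, and 0.1 s interval are illustrative):

```swift
import ARKit
import Vision

final class ThrottledClassifier: NSObject, ARSessionDelegate {

    private let visionQueue = DispatchQueue(label: "vision")
    private var isProcessing = false
    private var lastProcessed: TimeInterval = 0
    private let minInterval: TimeInterval = 0.1   // target ≈ 10 fps
    var visionRequests = [VNRequest]()

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Drop the frame if a request is in flight or the interval hasn't elapsed.
        guard !isProcessing,
              frame.timestamp - lastProcessed >= minInterval else { return }
        isProcessing = true
        lastProcessed = frame.timestamp

        let pixelBuffer = frame.capturedImage
        visionQueue.async {
            defer { self.isProcessing = false }
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                options: [:])
            try? handler.perform(self.visionRequests)
        }
    }
}
```

Because frames arriving while isProcessing is true are simply discarded, the effective classification rate adapts to however long the model actually takes, and the camera buffer is released promptly instead of piling up.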

That said, make sure you've read Recognizing Objects in Live Capture. It sounds like that's what you're trying to do, and there's good sample code available for it.
