
AVCaptureSession video preview without audio processing

I'm using AVCaptureSession to preview video in an augmented reality type app on iPhone. Since I'm also drawing OpenGL graphics on top of the video preview, the app consumes quite a lot of energy. I want to minimize CPU usage to save battery.

When I check the app with the Instruments Energy Usage template, I see that a considerable portion (~20%) of the CPU is "wasted" on audio processing. If I remove my capture session, audio processing takes no CPU, as expected.

I don't understand why the capture session is doing audio processing, since I haven't added any audio device input to it. Here's how I set up the session:

if (!captureSession) {
  captureSession = [[AVCaptureSession alloc] init];
  AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
  if (videoDevice) {
    NSError *error = nil;
    AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    // Only a video input is added here — no audio device input anywhere.
    // (Check the returned object, not the error; the error is only
    // guaranteed to be set when videoIn is nil.)
    if (videoIn) {
      if ([captureSession canAddInput:videoIn]) {
        [captureSession addInput:videoIn];
      }
    }
  }
}

if(!previewLayer) {
  previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
  [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
}

CGRect layerRect = [[viewBg layer] bounds];
[previewLayer setBounds:layerRect];
[previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
[[viewBg layer] addSublayer:previewLayer];

[captureSession startRunning];

Is there a way to disable audio (input) altogether, or how else could I get rid of the audio-processing CPU usage while previewing video input?
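For reference, this diagnostic loop (plain AVCaptureInput API, logging only) is how one can confirm that the session exposes no audio ports; in my session it prints only the video media type, yet audio processing still shows up in Instruments:

```objc
// List every input port on the session to verify no audio port exists.
for (AVCaptureDeviceInput *input in [captureSession inputs]) {
    for (AVCaptureInputPort *port in [input ports]) {
        NSLog(@"Input port media type: %@", [port mediaType]);
    }
}
```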

As an even larger performance optimization, may I suggest not overlaying non-opaque OpenGL ES content on an AVCaptureVideoPreviewLayer? Instead, you'll get much better rendering performance by grabbing the camera feed yourself, uploading each frame as a texture, and rendering your augmented reality content in front of a screen-sized textured quad that displays the camera image.

From personal experience, rendering non-opaque OpenGL ES content causes a serious slowdown due to the compositing that has to be performed in that case. Taking in your camera frames and displaying them as a background within your OpenGL ES scene lets you set your OpenGL ES hosting view to be opaque, which is far more efficient.
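A minimal sketch of that approach, assuming an OpenGL ES 2.0 context is current and a `textureCache` ivar has already been created once with `CVOpenGLESTextureCacheCreate()` (the queue name and delegate wiring here are illustrative, not from the original answer):

```objc
// Deliver BGRA frames to a delegate instead of a preview layer.
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
videoOut.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                            @(kCVPixelFormatType_32BGRA) };
[videoOut setSampleBufferDelegate:self
                            queue:dispatch_queue_create("cameraQueue", NULL)];
if ([captureSession canAddOutput:videoOut]) {
    [captureSession addOutput:videoOut];
}

// In the delegate, map each frame straight into an OpenGL ES texture.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    CVOpenGLESTextureRef texture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
        textureCache,          // CVOpenGLESTextureCacheRef, created once
        pixelBuffer, NULL, GL_TEXTURE_2D, GL_RGBA,
        (GLsizei)width, (GLsizei)height,
        GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

    // Bind the texture, draw it on a full-screen quad behind the AR
    // content, then release it so the cache can recycle the buffer.
    glBindTexture(CVOpenGLESTextureGetTarget(texture),
                  CVOpenGLESTextureGetName(texture));
    // ... draw the quad, then your AR geometry on top ...
    CFRelease(texture);
}
```

Because the camera image now lives inside the GL scene, the hosting view can be fully opaque and the system compositor no longer has to blend two layers every frame.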

I have some sample code for this as part of an object tracking example, but a more efficient version of camera capture and texture upload can be found in the GPUImageVideoCamera class of my open source GPUImage framework. Also, in my profiling of that framework's code, I've not seen audio processing occur unless an audio input was configured as part of the session, so you could examine what I do there.

I have the same problem. I am scanning barcodes with AV Foundation and have no interest in audio. Yet about 20% of the CPU is wasted on "Audio Processing" on my iPhone 5s.

I filed this bug report for it. You are welcome to dupe it.

