captureOutput from AVCaptureVideoDataOutputSampleBufferDelegate is not being called
captureOutput not being called by AVCaptureAudioDataOutputSampleBufferDelegate
I have an app that records video, but I need it to show the user, in real time, the pitch of the sound being captured on the microphone. I have been able to record audio and video to MP4 successfully using AVCaptureSession. However, when I add an AVCaptureAudioDataOutput to the session and assign the AVCaptureAudioDataOutputSampleBufferDelegate, I receive no errors, yet the captureOutput function is never called once the session starts.
Here is the code:
import UIKit
import AVFoundation
import CoreLocation

class ViewController: UIViewController,
        AVCaptureVideoDataOutputSampleBufferDelegate,
        AVCaptureFileOutputRecordingDelegate, CLLocationManagerDelegate,
        AVCaptureAudioDataOutputSampleBufferDelegate {

    var videoFileOutput: AVCaptureMovieFileOutput!
    let session = AVCaptureSession()
    var outputURL: URL!
    var timer: Timer!
    var locationManager: CLLocationManager!
    var currentMagnitudeValue: CGFloat!
    var defaultMagnitudeValue: CGFloat!
    var visualMagnitudeValue: CGFloat!
    var soundLiveOutput: AVCaptureAudioDataOutput!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.setupAVCapture()
    }

    func setupAVCapture() {
        session.beginConfiguration()
        //Add the camera INPUT to the session
        let videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                  for: .video, position: .front)
        guard
            let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice!),
            session.canAddInput(videoDeviceInput)
        else { return }
        session.addInput(videoDeviceInput)
        //Add the microphone INPUT to the session
        let microphoneDevice = AVCaptureDevice.default(.builtInMicrophone, for: .audio, position: .unspecified)
        guard
            let audioDeviceInput = try? AVCaptureDeviceInput(device: microphoneDevice!),
            session.canAddInput(audioDeviceInput)
        else { return }
        session.addInput(audioDeviceInput)
        //Add the video file OUTPUT to the session
        videoFileOutput = AVCaptureMovieFileOutput()
        guard session.canAddOutput(videoFileOutput) else { return }
        if (session.canAddOutput(videoFileOutput)) {
            session.addOutput(videoFileOutput)
        }
        //Add the audio output so we can get PITCH of the sounds
        //AND assign the SampleBufferDelegate
        soundLiveOutput = AVCaptureAudioDataOutput()
        soundLiveOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "test"))
        if (session.canAddOutput(soundLiveOutput)) {
            session.addOutput(soundLiveOutput)
            print("Live AudioDataOutput added")
        } else {
            print("Could not add AudioDataOutput")
        }
        //Preview Layer
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        let rootLayer: CALayer = self.cameraView.layer
        rootLayer.masksToBounds = true
        previewLayer.frame = rootLayer.bounds
        rootLayer.addSublayer(previewLayer)
        previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
        //Finalize the session
        session.commitConfiguration()
        //Begin the session
        session.startRunning()
    }

    func captureOutput(_: AVCaptureOutput, didOutput: CMSampleBuffer, from: AVCaptureConnection) {
        print("Bingo")
    }
}
Expected output:
Bingo
Bingo
Bingo
...
I have read:
StackOverflow: captureOutput not being called - the user had not declared the captureOutput method correctly.
StackOverflow: AVCaptureVideoDataOutput captureOutput not being called - the user had not declared the captureOutput method at all.
Apple - AVCaptureAudioDataOutputSampleBufferDelegate - Apple's documentation on the delegate and its method - the method matches the one I have declared.
Other common mistakes I came across online:
Using AVCaptureMetadataOutput instead of AVCaptureAudioDataOutput - although I could not find this in the Apple documentation, I tried it as well, but the metadataOutput function was similarly never called.
I am out of ideas. Am I missing something obvious?
The method you are using has been updated to the one below, and it is called for both AVCaptureAudioDataOutput and AVCaptureVideoDataOutput. Make sure you check which output the sample buffer came from before writing it to the asset writer.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    //Make sure you check the output before using the sample buffer
    if output == audioDataOutput {
        //Use sample buffer for audio
    }
}
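The check above works by comparing the `output` argument against a reference to the data output you stored when configuring the session. A minimal sketch of that dispatch pattern in plain Swift - the `Output` and `Recorder` types here are stand-ins for AVCaptureOutput and the view controller, not real AVFoundation types:

```swift
// Stand-in for AVCaptureOutput: a reference type we can compare by identity.
final class Output {}

final class Recorder {
    // Stored when the session is configured, just like soundLiveOutput
    // and videoFileOutput in the question's code.
    let audioDataOutput = Output()
    let videoDataOutput = Output()

    // The delegate callback decides what to do by checking WHICH
    // output delivered the buffer, using reference identity (===).
    func handle(_ output: Output) -> String {
        if output === audioDataOutput { return "audio" }
        if output === videoDataOutput { return "video" }
        return "unknown"
    }
}
```

With real AVFoundation types, `==` on AVCaptureOutput (an NSObject subclass) defaults to the same identity comparison.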
OK, nobody got back to me, but after playing around with it I worked out that the correct way to declare the captureOutput method for Swift 4 is as follows:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    //Do your stuff here
}

Unfortunately, the online documentation for this is very poor. I guess you just have to get it exactly right - no errors are thrown if you misspell or misname the parameters, because it is an optional protocol method, so the delegate callback is simply never matched.
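Once the delegate does fire, the asker's actual goal (a live magnitude value for the UI, i.e. currentMagnitudeValue) still requires reducing the PCM samples in the CMSampleBuffer to a single level. A hedged sketch of that reduction, assuming signed 16-bit PCM samples have already been extracted from the buffer (e.g. via CMSampleBufferGetDataBuffer; that extraction is omitted here, and the `rmsDecibels` helper name is my own, not from the original post):

```swift
import Foundation

/// Reduce a buffer of signed 16-bit PCM samples to an RMS magnitude in
/// decibels relative to full scale (dBFS). 0 dB means a full-scale signal;
/// silence (or an empty buffer) comes back as -infinity.
func rmsDecibels(_ samples: [Int16]) -> Float {
    guard !samples.isEmpty else { return -Float.infinity }
    // Normalise each sample to [-1, 1] and accumulate the mean square.
    let meanSquare = samples.reduce(Float(0)) { acc, s in
        let x = Float(s) / Float(Int16.max)
        return acc + x * x
    } / Float(samples.count)
    // 10 * log10(mean square) == 20 * log10(RMS amplitude).
    return Float(10 * log10(Double(meanSquare)))
}
```

In the delegate you would call this on the dispatch queue passed to setSampleBufferDelegate and hop back to the main queue before touching the UI.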
For me, the problem was that the AVAudioSession and AVCaptureSession were declared as local variables, so they went out of scope as soon as I started the session. Once I moved them up to class-level variables, everything worked fine!