AVAssetWriter Video Output Does Not Play Appended Audio

I have an AVAssetWriter set up to record a video with an applied filter, which I then play back via AVQueuePlayer.

My issue is that the audio output does append to the audio input, but no sound plays during playback. I have not come across any existing solutions and would appreciate any guidance.

Secondarily, my .AVPlayerItemDidPlayToEndTime notification observer, which I use to loop the playback, does not fire either.
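For reference, registering that observer typically looks like the sketch below. The names recordedVideoURL and token are placeholders rather than anything from the question, and the notification only fires when the item genuinely reaches its end time:

import AVFoundation

// Hypothetical setup: recordedVideoURL stands in for the writer's output URL.
let playerItem = AVPlayerItem(url: recordedVideoURL)
let queuePlayer = AVQueuePlayer(items: [playerItem])
queuePlayer.actionAtItemEnd = .none   // keep the item queued instead of advancing past it

// Store the returned token (e.g. in a property) so the observer can be removed later.
let token = NotificationCenter.default.addObserver(
    forName: .AVPlayerItemDidPlayToEndTime,
    object: playerItem,   // must be the item that finishes, or nil to match any item
    queue: .main
) { _ in
    // Loop: rewind to the start and resume playback.
    queuePlayer.seek(to: .zero)
    queuePlayer.play()
}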

AVCaptureSession Setup

func setupSession() {

    let session = AVCaptureSession()
    session.sessionPreset = .medium

    // Discover the front camera and microphone and wrap them in device inputs.
    guard
        let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
        let mic = AVCaptureDevice.default(.builtInMicrophone, for: .audio, position: .unspecified),
        let videoInput = try? AVCaptureDeviceInput(device: camera),
        let audioInput = try? AVCaptureDeviceInput(device: mic),
        session.canAddInput(videoInput), session.canAddInput(audioInput) else { return }

    // Sample-buffer outputs for video and audio, both delivered on one serial queue.
    let videoOutput = AVCaptureVideoDataOutput()
    let audioOutput = AVCaptureAudioDataOutput()
    guard session.canAddOutput(videoOutput), session.canAddOutput(audioOutput) else { return }
    let queue = DispatchQueue(label: "recordingQueue", qos: .userInteractive)
    videoOutput.setSampleBufferDelegate(self, queue: queue)
    audioOutput.setSampleBufferDelegate(self, queue: queue)

    session.beginConfiguration()

    session.addInput(videoInput)
    session.addInput(audioInput)
    session.addOutput(videoOutput)
    session.addOutput(audioOutput)

    session.commitConfiguration()

    if let connection = videoOutput.connection(with: AVMediaType.video) {
        if connection.isVideoStabilizationSupported { connection.preferredVideoStabilizationMode = .auto }
        connection.isVideoMirrored = true
        connection.videoOrientation = .portrait
    }

    _videoOutput = videoOutput
    _audioOutput = audioOutput
    _captureSession = session

    // startRunning() blocks, so keep it off the main thread.
    DispatchQueue.global(qos: .default).async { session.startRunning() }
}

AVAssetWriter Setup + didOutput Delegate

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds

    if output == _videoOutput {
        if connection.isVideoOrientationSupported { connection.videoOrientation = .portrait }

        guard let cvImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvImageBuffer: cvImageBuffer)

        // Filter the frame and render it into a pixel buffer for both preview and writing.
        guard let filteredCIImage = applyFilters(inputImage: ciImage) else { return }
        self.ciImage = filteredCIImage

        guard let cvPixelBuffer = getCVPixelBuffer(from: filteredCIImage) else { return }
        self.cvPixelBuffer = cvPixelBuffer

        self.ciContext.render(filteredCIImage, to: cvPixelBuffer, bounds: filteredCIImage.extent, colorSpace: CGColorSpaceCreateDeviceRGB())

        metalView.draw()
    }

    switch _captureState {
    case .start:

        // Lazily create the writer and its inputs on the first buffer after recording starts.
        guard let outputUrl = tempURL else { return }

        let writer = try! AVAssetWriter(outputURL: outputUrl, fileType: .mp4)

        let videoSettings = _videoOutput!.recommendedVideoSettingsForAssetWriter(writingTo: .mp4)
        let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
        videoInput.mediaTimeScale = CMTimeScale(bitPattern: 600)
        videoInput.expectsMediaDataInRealTime = true

        let pixelBufferAttributes = [
            kCVPixelBufferCGImageCompatibilityKey: NSNumber(value: true),
            kCVPixelBufferCGBitmapContextCompatibilityKey: NSNumber(value: true),
            kCVPixelBufferPixelFormatTypeKey: NSNumber(value: Int32(kCVPixelFormatType_32ARGB))
        ] as [String: Any]

        let adapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoInput, sourcePixelBufferAttributes: pixelBufferAttributes)
        if writer.canAdd(videoInput) { writer.add(videoInput) }

        let audioSettings = _audioOutput!.recommendedAudioSettingsForAssetWriter(writingTo: .mp4) as? [String: Any]
        let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
        audioInput.expectsMediaDataInRealTime = true
        if writer.canAdd(audioInput) { writer.add(audioInput) }

        _filename = outputUrl.absoluteString
        _assetWriter = writer
        _assetWriterVideoInput = videoInput
        _assetWriterAudioInput = audioInput
        _adapter = adapter
        _captureState = .capturing
        _time = timestamp

        writer.startWriting()
        writer.startSession(atSourceTime: .zero)

    case .capturing:

        if output == _videoOutput {
            if _assetWriterVideoInput?.isReadyForMoreMediaData == true {
                // Video frames are rebased so the first frame lands near time zero.
                let time = CMTime(seconds: timestamp - _time, preferredTimescale: CMTimeScale(600))
                _adapter?.append(self.cvPixelBuffer, withPresentationTime: time)
            }
        } else if output == _audioOutput {
            if _assetWriterAudioInput?.isReadyForMoreMediaData == true {
                // Audio buffers are appended with their original capture timestamps.
                _assetWriterAudioInput?.append(sampleBuffer)
            }
        }

    case .end:

        guard _assetWriterVideoInput?.isReadyForMoreMediaData == true, _assetWriter!.status != .failed else { break }

        _assetWriterVideoInput?.markAsFinished()
        _assetWriterAudioInput?.markAsFinished()
        _assetWriter?.finishWriting { [weak self] in

            guard let output = self?._assetWriter?.outputURL else { return }

            self?._captureState = .idle
            self?._assetWriter = nil
            self?._assetWriterVideoInput = nil
            self?._assetWriterAudioInput = nil

            self?.previewRecordedVideo(with: output)
        }

    default:
        break
    }
}

Answer

Start your timeline at the presentation timestamp of the first audio or video sample buffer that you encounter:

writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))

Previously you started the timeline at zero, but the captured sample buffers carry timestamps that usually appear to be relative to the amount of time since system boot, so there is a large, undesired gap between when your file "starts" (the sourceTime you pass to AVAssetWriter) and when the video and audio actually appear.
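If you want to confirm this on your own device, a quick sketch is to compare a buffer's timestamp with the host clock inside the delegate callback:

// Inside captureOutput(_:didOutput:from:) — both values should be large and
// close together, consistent with timestamps measured since system boot.
let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
let hostNow = CMClockGetTime(CMClockGetHostTimeClock())
print("sample PTS: \(pts.seconds)s, host clock: \(hostNow.seconds)s")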

Your question doesn't say whether you see video at all, and I'd half expect some video players to skip over the big stretch of nothing to the point in the timeline where your samples begin, but in any case the file is wrong.
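Applied to the question's state machine, the change might look roughly like the sketch below. This is only a sketch, and it additionally assumes that video frames are then appended with their original timestamps rather than rebased against _time, so that they land on the same timeline as the audio:

case .start:
    // ... writer / input / adapter setup exactly as before ...
    writer.startWriting()
    // Anchor the file's timeline at the first buffer's own timestamp instead of .zero.
    writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
    _captureState = .capturing

case .capturing:
    if output == _videoOutput, _assetWriterVideoInput?.isReadyForMoreMediaData == true {
        // With the session anchored above, append using the buffer's original
        // presentation timestamp; no manual "timestamp - _time" offset.
        _adapter?.append(self.cvPixelBuffer,
                         withPresentationTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
    } else if output == _audioOutput, _assetWriterAudioInput?.isReadyForMoreMediaData == true {
        _assetWriterAudioInput?.append(sampleBuffer)
    }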
