
AVAssetWriterInput possible for live audio through Core Audio?

I am looking to adapt AVFoundation to do something that seems like it should be possible, but I cannot find any support or examples anywhere for my scenario.

I need to grab video from the front camera and combine that with audio that I have coming from Core Audio.

I have code working that solves the common case of grabbing video from the camera and combining that with audio from the microphone, and it works great. This is mostly adapted from the RosyWriter Apple sample code.

However, I cannot find any way to take the live stream of audio coming out of Core Audio, create an AVAssetWriterInput from it, and add it as an input to my AVCaptureSession. Every resource I can find on setting up AVCaptureInput and AVAssetWriterInput revolves around initializing them with devices and grabbing media from those devices in real time; I'm not trying to get audio from a device.

Is there a way to create an AVCaptureInput, tell it to expect data in a certain ASBD format, and then feed it that data from my Core Audio callbacks? I don't want to have to write the data to disk and then read it back, since I suspect that would be very slow. It seems there should be a solution, but I cannot find one.

Suffice it to say, I have code that creates CMSampleBuffers out of the AudioBufferList objects I use to hold the audio. I've inspected the CMSampleBuffers and they appear to contain valid frames of data, and when I send them to my modified RosyWriterViewProcessor's writeSampleBuffer:ofType: the write seems to succeed (I get no errors), but when I open the finished video file I only see video and hear no audio.
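For reference, the construction is roughly along the lines of the sketch below (a simplified sketch, not my exact code; numFrames and presentationTime are assumed to come from the Core Audio render callback, and asbd is the ASBD shown further down):

#import <CoreMedia/CoreMedia.h>
#import <AudioToolbox/AudioToolbox.h>

// Simplified sketch: wrap an AudioBufferList from a Core Audio callback in a
// CMSampleBuffer so it can be handed to an AVAssetWriterInput.
static CMSampleBufferRef CreateAudioSampleBuffer(const AudioBufferList *bufferList,
                                                 const AudioStreamBasicDescription *asbd,
                                                 UInt32 numFrames,
                                                 CMTime presentationTime)
{
    CMAudioFormatDescriptionRef format = NULL;
    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, asbd,
                                                     0, NULL, 0, NULL, NULL, &format);
    if (status != noErr) return NULL;

    // One timing entry covers the whole buffer; each frame lasts 1/sampleRate seconds.
    CMSampleTimingInfo timing;
    timing.duration = CMTimeMake(1, (int32_t)asbd->mSampleRate);
    timing.presentationTimeStamp = presentationTime;
    timing.decodeTimeStamp = kCMTimeInvalid;

    CMSampleBufferRef sampleBuffer = NULL;
    status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL,
                                  format, numFrames, 1, &timing, 0, NULL, &sampleBuffer);
    CFRelease(format);
    if (status != noErr) return NULL;

    // Copy the PCM data out of the AudioBufferList into the sample buffer.
    status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
                                                            kCFAllocatorDefault,
                                                            kCFAllocatorDefault,
                                                            0,
                                                            bufferList);
    if (status != noErr) {
        CFRelease(sampleBuffer);
        return NULL;
    }
    return sampleBuffer;
}

I suspect the presentationTimeStamp here needs to be on the same clock/timeline as the video sample buffers coming from the capture session, otherwise the written audio could end up misaligned or dropped -- which may be related to my silent output.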

Does anyone have any tips on how to accomplish what I'm trying to do?

Here is the standard ASBD I've been using throughout:

#include <CoreAudio/CoreAudioTypes.h>

// 16-bit signed-integer, interleaved stereo PCM at 44.1 kHz
AudioStreamBasicDescription audioDescription;
memset(&audioDescription, 0, sizeof(audioDescription));
audioDescription.mFormatID          = kAudioFormatLinearPCM;
audioDescription.mFormatFlags       = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
audioDescription.mChannelsPerFrame  = 2;
audioDescription.mBytesPerPacket    = sizeof(SInt16) * audioDescription.mChannelsPerFrame;
audioDescription.mFramesPerPacket   = 1;
audioDescription.mBytesPerFrame     = sizeof(SInt16) * audioDescription.mChannelsPerFrame;
audioDescription.mBitsPerChannel    = 8 * sizeof(SInt16);
audioDescription.mSampleRate        = 44100.0;
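
If it helps, the writer input I would expect to pair with that ASBD would use matching LPCM output settings, something along these lines (a sketch; assetWriter is assumed to be the writer created elsewhere, as in RosyWriter):

// Sketch: an audio AVAssetWriterInput whose output settings mirror the ASBD above
// (16-bit interleaved stereo LPCM at 44.1 kHz). Passing outputSettings:nil is
// another option if the incoming sample buffers should be written unmodified.
AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(channelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

NSDictionary *audioSettings = @{
    AVFormatIDKey               : @(kAudioFormatLinearPCM),
    AVSampleRateKey             : @(44100.0),
    AVNumberOfChannelsKey       : @(2),
    AVLinearPCMBitDepthKey      : @(16),
    AVLinearPCMIsFloatKey       : @NO,
    AVLinearPCMIsBigEndianKey   : @NO,
    AVLinearPCMIsNonInterleaved : @NO,
    AVChannelLayoutKey          : [NSData dataWithBytes:&channelLayout length:sizeof(channelLayout)]
};

AVAssetWriterInput *audioWriterInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                       outputSettings:audioSettings];
audioWriterInput.expectsMediaDataInRealTime = YES;
// [assetWriter addInput:audioWriterInput];   // assetWriter created elsewhere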

Barring a solution, I do have separate video and audio files that I think I could patch together with AVComposition, but I'd rather not go that route: my video and audio files regularly come out with different lengths, and I don't want to battle stretching one track or the other just to fit them together -- they might not even end up synchronized. I'd rather set everything up in an AVCaptureSession and have AVFoundation do the hard work of interleaving it all for me.

Try creating a valid asset writer input in the raw PCM format, but in the callback, discard the data coming from that input and substitute data of equal length saved from the buffers your audio unit produces.
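
A rough sketch of that suggestion, reusing the audioConnection and writeSampleBuffer:ofType: names from the RosyWriter-style code in the question, with a hypothetical coreAudioRingBuffer that the Core Audio render callback fills:

// Sketch of the substitution idea: keep the microphone's audio data output running
// so the capture session delivers valid, video-synchronized audio sample buffers,
// then overwrite their payload with the same number of frames taken from the
// Core Audio side before handing them to the asset writer.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (connection != audioConnection) {
        [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeVideo];
        return;
    }

    CMItemCount frameCount = CMSampleBufferGetNumSamples(sampleBuffer);

    // Get a writable view of the buffer's AudioBufferList so it can be overwritten.
    AudioBufferList bufferList;
    CMBlockBufferRef blockBuffer = NULL;
    OSStatus status = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer, NULL, &bufferList, sizeof(bufferList), NULL, NULL,
        kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
    if (status == noErr) {
        // Hypothetical ring buffer filled by the Core Audio callback; copy exactly
        // frameCount frames of your own audio over the microphone samples.
        [self.coreAudioRingBuffer readFrames:frameCount intoBufferList:&bufferList];
        [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeAudio];
        CFRelease(blockBuffer);
    }
}

This only works if the session's audio format matches what the audio unit produces (sample rate, channel count, sample format); otherwise the substituted data and the sample buffer's format description won't agree.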
