Record all sounds generated by my app in an audio file (not from the mic) — record sounds played by my iPhone app with Audio Units
I've been digging into iOS and Audio Units today and have found a lot of useful resources (this site included).

First, I'm confused about one thing: is it really necessary to create an audio graph with a mixer unit in order to record the sounds an app plays? Or is it enough to play the sounds with ObjectAL (or simpler AVAudioPlayer calls), create a Remote IO unit, and attach a recording callback to the right bus?

Second, a more programmatic question! Since I'm not yet comfortable with Audio Unit concepts, I tried to adapt Apple's MixerHost sample project so that it could also record the resulting mix. Obviously, I tried to do this with the help of Michael Tyson's RemoteIO post.

I get an EXC_BAD_ACCESS in my callback function:
```objc
static OSStatus recordingCallback (void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData) {

    AudioBufferList *bufferList; // <- Fill this up with buffers (you will want to malloc it, as it's a dynamic-length list)

    EffectState *effectState = (EffectState *)inRefCon;
    AudioUnit rioUnit = effectState->rioUnit;

    OSStatus status;

    // BELOW I GET THE ERROR
    status = AudioUnitRender(rioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             bufferList);
    if (noErr != status) { NSLog(@"AudioUnitRender error"); return noErr; }

    // Now, we have the samples we just read sitting in buffers in bufferList
    //ExtAudioFileWriteAsync(effectState->audioFileRef, inNumberFrames, bufferList);

    return noErr;
}
```
Before the callback function is used, I declared this in MixerHostAudio.h:
```objc
typedef struct {
    AudioUnit       rioUnit;
    ExtAudioFileRef audioFileRef;
} EffectState;
```
and created these in the interface:
```objc
AudioUnit iOUnit;
EffectState effectState;
AudioStreamBasicDescription iOStreamFormat;
...
@property AudioUnit iOUnit;
@property (readwrite) AudioStreamBasicDescription iOStreamFormat;
```
Then, in the implementation file MixerHostAudio.m:
```objc
#define kOutputBus 0
#define kInputBus  1
...
@synthesize iOUnit; // the Remote IO unit
...
result = AUGraphNodeInfo(processingGraph,
                         iONode,
                         NULL,
                         &iOUnit);
if (noErr != result) { [self printErrorMessage:@"AUGraphNodeInfo" withStatus:result]; return; }

// Enable IO for recording
UInt32 flag = 1;
result = AudioUnitSetProperty(iOUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              kInputBus,
                              &flag,
                              sizeof(flag));
if (noErr != result) { [self printErrorMessage:@"AudioUnitSetProperty" withStatus:result]; return; }

// Describe format
iOStreamFormat.mSampleRate       = 44100.00;
iOStreamFormat.mFormatID         = kAudioFormatLinearPCM;
iOStreamFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
iOStreamFormat.mFramesPerPacket  = 1;
iOStreamFormat.mChannelsPerFrame = 1;
iOStreamFormat.mBitsPerChannel   = 16;
iOStreamFormat.mBytesPerPacket   = 2;
iOStreamFormat.mBytesPerFrame    = 2;

// Apply format
result = AudioUnitSetProperty(iOUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &iOStreamFormat,
                              sizeof(iOStreamFormat));
if (noErr != result) { [self printErrorMessage:@"AudioUnitSetProperty" withStatus:result]; return; }

result = AudioUnitSetProperty(iOUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &iOStreamFormat,
                              sizeof(iOStreamFormat));
if (noErr != result) { [self printErrorMessage:@"AudioUnitSetProperty" withStatus:result]; return; }

effectState.rioUnit = iOUnit;

// Set input callback ----> RECORDING
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc       = recordingCallback;
callbackStruct.inputProcRefCon = self;
result = AudioUnitSetProperty(iOUnit,
                              kAudioOutputUnitProperty_SetInputCallback,
                              kAudioUnitScope_Global,
                              kInputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
if (noErr != result) { [self printErrorMessage:@"AudioUnitSetProperty" withStatus:result]; return; }
```
But I don't know what is going wrong, nor how to dig into it. Note: the EffectState struct exists because I'm also trying to integrate the ability of the BioAudio project to write to a file from the buffers.

Third, I wonder whether there is an easier way to record the sounds played by my iPhone app (i.e. excluding the microphone)?
Found it myself. I had forgotten to hook things up like this:

```objc
callbackStruct.inputProcRefCon = &effectState;
```

That settles the code part. Now I'm back to my conceptual questions...