
How to write audio file locally recorded from microphone using AudioBuffer in iPhone?

I am new to the Audio framework. Can anyone help me write to a file the audio that is being played, by capturing it from the microphone?

Below is the code to play the mic input through the iPhone speaker; now I would like to save the audio on the iPhone for future use.

I found the code here to record audio using the microphone: http://www.stefanpopp.de/2011/capture-iphone-microphone/

/**
 Code starts here for playing the recorded voice.
 */

static OSStatus playbackCallback(void *inRefCon, 
                                 AudioUnitRenderActionFlags *ioActionFlags, 
                                 const AudioTimeStamp *inTimeStamp, 
                                 UInt32 inBusNumber, 
                                 UInt32 inNumberFrames, 
                                 AudioBufferList *ioData) {    

    /**
     This is the reference to the object that owns the callback.
     */
    AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;

    // iterate over incoming stream and copy to output stream
    for (int i=0; i < ioData->mNumberBuffers; i++) { 
        AudioBuffer buffer = ioData->mBuffers[i];

        // find minimum size
        UInt32 size = min(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

        // copy buffer to audio buffer which gets played after function return
        memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

        // set data size
        buffer.mDataByteSize = size; 

        // get a copy of the recorder struct variable (this copies the struct)
        Recorder recInfo = audioProcessor.audioRecorder;
        // write the bytes
        OSStatus audioErr = noErr;
        if (recInfo.running) {
            audioErr = AudioFileWriteBytes(recInfo.recordFile,
                                           false,
                                           recInfo.inStartingByte,
                                           &size,
                                           buffer.mData);
            assert(audioErr == noErr);
            // increment our byte count
            recInfo.inStartingByte += (SInt64)size; // size should be number of bytes
            audioProcessor.audioRecorder = recInfo;
        }
    }

    return noErr;
}

-(void)prepareAudioFileToRecord {

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;

    NSTimeInterval time = [[NSDate date] timeIntervalSince1970]; // returned as a double
    long digits = (long)time; // this is the first 10 digits
    int decimalDigits = (int)(fmod(time, 1) * 1000); // this will get the 3 missing digits
    //    long timestamp = (digits * 1000) + decimalDigits;
    NSString *timeStampValue = [NSString stringWithFormat:@"%ld", digits];
    //    NSString *timeStampValue = [NSString stringWithFormat:@"%ld.%d", digits, decimalDigits];

    NSString *fileName = [NSString stringWithFormat:@"test%@.caf", timeStampValue];
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // modify the ASBD (see EDIT: towards the end of this post!)
    audioFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;

    // set up the file (bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileCAFType,
                                      &audioFormat,
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert(audioErr == noErr); // simple error checking
    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
    self.audioRecorder = audioRecorder;
}

Thanks in advance, Bala.

To write the bytes from an AudioBuffer to a file locally, we need help from Audio File Services, which is included in the AudioToolbox framework.

Conceptually we will do the following: set up an audio file and maintain a reference to it (we need this reference to be accessible from the render callback that you included in your post). We also need to keep track of the number of bytes that are written each time the callback is called. Finally, we need a flag to check that will let us know to stop writing to the file and close it.

Because the code in the link you provided declares an AudioStreamBasicDescription which is LPCM, and hence constant bit rate, we can use the AudioFileWriteBytes function (writing compressed audio is more involved and would use the AudioFileWritePackets function instead).
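To see why constant bit rate matters here: with LPCM, every frame occupies the same number of bytes, so the byte offset of any frame in the file is simple arithmetic, and AudioFileWriteBytes can be driven by a running byte counter. A minimal plain-C sketch of that arithmetic (the helper names are my own; the values match the packed 16-bit mono 44.1 kHz format used in the linked code):

```c
#include <assert.h>
#include <stdint.h>

/* bytes per frame for packed LPCM: channels * (bits per channel / 8) */
static uint32_t lpcm_bytes_per_frame(uint32_t channels, uint32_t bits_per_channel) {
    return channels * (bits_per_channel / 8);
}

/* byte offset of the first byte of frame `frame_index`; this is the
 * kind of value that inStartingByte tracks between callback calls */
static int64_t lpcm_byte_offset(int64_t frame_index, uint32_t bytes_per_frame) {
    return frame_index * (int64_t)bytes_per_frame;
}
```

For example, one second of 16-bit mono at 44.1 kHz starts frame 44100 at byte offset 88200. Compressed formats break this property (packet sizes vary), which is why they need AudioFileWritePackets and a packet table instead.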

Let's start by declaring a custom struct (which contains all the extra data we'll need), adding an instance variable of this custom struct, and making a property that points to the struct variable. We'll add this to the AudioProcessor custom class, as you already have access to this object from within the callback, where you typecast in this line.

AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;

Add this to AudioProcessor.h (above the @interface):

typedef struct Recorder {
AudioFileID recordFile;
SInt64 inStartingByte;
Boolean running;
} Recorder;

Now let's add an instance variable, make a pointer property, and assign it to the instance variable (so we can access it from within the callback function). In the @interface, add an instance variable named audioRecorder and also make the ASBD available to the class.

Recorder audioRecorder;
AudioStreamBasicDescription recordFormat;// assign this ivar to where the asbd is created in the class

In the method -(void)initializeAudio, comment out or delete this line, as we have made recordFormat an ivar.

//AudioStreamBasicDescription recordFormat;

Now add the kAudioFormatFlagIsBigEndian format flag where the ASBD is set up.

// also modify the ASBD in the AudioProcessor classes -(void)initializeAudio method (see EDIT: towards the end of this post!)
    recordFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;

And finally, add it as a property that is a pointer to the audioRecorder instance variable, and don't forget to synthesise it in AudioProcessor.m. We will name the pointer property audioRecorderPointer.

@property Recorder *audioRecorderPointer;

// in .m synthesise the property
@synthesize audioRecorderPointer;

Now let's assign the pointer to the ivar (this could be placed in the -(void)initializeAudio method of the AudioProcessor class):

// ASSIGN POINTER PROPERTY TO IVAR
self.audioRecorderPointer = &audioRecorder;
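The pointer property matters because Recorder is a plain C struct: reading it through a non-pointer property hands the callback a copy, and any bytes counted into that copy are lost unless the whole struct is assigned back. A small standalone C sketch of the difference (illustrative names only, not part of the original class):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct Recorder {
    int64_t inStartingByte;
    bool running;
} Recorder;

/* Passing the struct by value modifies only a local copy;
 * the caller's Recorder is left untouched. */
static int64_t advance_copy(Recorder rec, uint32_t size) {
    rec.inStartingByte += size; /* updates the copy only */
    return rec.inStartingByte;
}

/* Passing a pointer updates the shared state, which is what the
 * render callback needs so the count survives between invocations. */
static void advance_via_pointer(Recorder *rec, uint32_t size) {
    rec->inStartingByte += size;
}
```

This is exactly the trap in the question's callback code, which copies audioProcessor.audioRecorder into a local and has to assign the whole struct back on every render cycle; the pointer property avoids that round trip.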

Now, in AudioProcessor.m, let's add a method to set up the file and open it so we can write to it. This should be called before you start the AUGraph running.

-(void)prepareAudioFileToRecord {
    // let's set up a test file in the documents directory
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
    NSString *fileName = @"test_recording.aif";
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // set up the file (bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileAIFFType,
                                      &recordFormat,
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert(audioErr == noErr); // simple error checking
    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
}

Okay, we are nearly there. Now we have a file to write to and an AudioFileID that can be accessed from the render callback. So, inside the callback function you posted, add the following right before you return noErr at the end of the method.

// get a pointer to the recorder struct instance variable
Recorder *recInfo = audioProcessor.audioRecorderPointer;
// write the bytes
OSStatus audioErr = noErr;
if (recInfo->running) {
    audioErr = AudioFileWriteBytes(recInfo->recordFile,
                                   false,
                                   recInfo->inStartingByte,
                                   &size,
                                   buffer.mData);
    assert(audioErr == noErr);
    // increment our byte count
    recInfo->inStartingByte += (SInt64)size; // size should be number of bytes
}
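The bookkeeping above is simply "write size bytes at inStartingByte, then advance the counter by size". A standalone sketch of the same pattern, with stdio standing in for the AudioFile API (hypothetical helper, for illustration only):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Write `size` bytes at the tracked offset and advance the counter on
 * success, mirroring how the render callback advances
 * recInfo->inStartingByte after each AudioFileWriteBytes call. */
static int write_chunk(FILE *f, int64_t *in_starting_byte,
                       const void *data, uint32_t size) {
    if (fseek(f, (long)*in_starting_byte, SEEK_SET) != 0) return -1;
    if (fwrite(data, 1, size, f) != size) return -1;
    *in_starting_byte += size; /* size is a byte count, not a frame count */
    return 0;
}
```

Because the offset only advances on success, a failed write leaves the counter consistent with what is actually in the file, so the next callback retries at the right position.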

When we want to stop recording (probably invoked by some user action), simply set the running boolean to false and close the file, like this, somewhere in the AudioProcessor class.

audioRecorder.running = false;
OSStatus audioErr = AudioFileClose(audioRecorder.recordFile);
assert (audioErr == noErr);

EDIT: the endianness of the samples needs to be big-endian for the file, so add the kAudioFormatFlagIsBigEndian bit mask flag to the ASBD in the source code found at the link provided in the question.
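The reason is that AIFF stores 16-bit LPCM big-endian, while iPhone hardware is little-endian; setting kAudioFormatFlagIsBigEndian makes Core Audio deliver the samples already in file order. The conversion itself is just a byte exchange per 16-bit sample, sketched here in plain C for illustration (in real Core Audio code you would rely on the format flag, or use CFSwapInt16HostToBig):

```c
#include <assert.h>
#include <stdint.h>

/* Exchange the two bytes of a 16-bit sample, converting between
 * little-endian and big-endian representations (the swap is its own
 * inverse, so the same function converts in either direction). */
static uint16_t swap16(uint16_t v) {
    return (uint16_t)((v << 8) | (v >> 8));
}
```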

For extra info about this topic, the Apple documents are a great resource, and I also recommend reading 'Learning Core Audio' by Chris Adamson and Kevin Avila (of which I own a copy).

Use Audio Queue Services.

There is an example in the Apple documentation that does exactly what you ask:

Audio Queue Services Programming Guide - Recording Audio

