Recording live Audio Streams on iOS
//Declare string for application temp path and tack on the file extension
string fileName = string.Format ("Myfile{0}.wav", DateTime.Now.ToString ("yyyyMMddHHmmss"));
string audioFilePath = Path.Combine (Path.GetTempPath (), fileName);
Console.WriteLine("Audio File Path: " + audioFilePath);
url = NSUrl.FromFilename(audioFilePath);
//set up the NSObject Array of values that will be combined with the keys to make the NSDictionary
NSObject[] values = new NSObject[]
{
    NSNumber.FromFloat (44100.0f), //Sample Rate
    NSNumber.FromInt32 ((int)AudioToolbox.AudioFormatType.LinearPCM), //AVFormat
    NSNumber.FromInt32 (2), //Channels
    NSNumber.FromInt32 (16), //PCMBitDepth
    NSNumber.FromBoolean (false), //IsBigEndianKey
    NSNumber.FromBoolean (false) //IsFloatKey
};
//Set up the NSObject Array of keys that will be combined with the values to make the NSDictionary
NSObject[] keys = new NSObject[]
{
    AVAudioSettings.AVSampleRateKey,
    AVAudioSettings.AVFormatIDKey,
    AVAudioSettings.AVNumberOfChannelsKey,
    AVAudioSettings.AVLinearPCMBitDepthKey,
    AVAudioSettings.AVLinearPCMIsBigEndianKey,
    AVAudioSettings.AVLinearPCMIsFloatKey
};
//Set Settings with the Values and Keys to create the NSDictionary
settings = NSDictionary.FromObjectsAndKeys (values, keys);
//Set recorder parameters
recorder = AVAudioRecorder.Create(url, new AudioSettings(settings), out error);
//Set Recorder to Prepare To Record
recorder.PrepareToRecord();
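(For context, a minimal sketch of how recording would typically be started and stopped with this setup, using the standard AVAudioRecorder API:)

```csharp
// Begin recording to the file configured above.
recorder.Record ();
// ... later, when finished:
recorder.Stop ();
```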
This code works fine, but how can I get the recorded audio from the microphone directly for streaming? I can't find any information on the internet and hope you can help me.
You are looking for buffered access to the audio stream (recording or playback), which iOS provides via Audio Queue Services (AVAudioRecorder is far too high level for this). As each audio buffer is filled, iOS invokes your callback with the filled buffer from the queue. You process it (save it to disk, write it to a C#-based Stream, send it to a playback audio queue [the speakers], etc.) and then, typically, place it back into the queue for reuse.

Something like this starts recording into a queue of audio buffers:
var recordFormat = new AudioStreamBasicDescription () {
    SampleRate = 8000,
    Format = AudioFormatType.LinearPCM,
    FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger | AudioFormatFlags.LinearPCMIsPacked,
    FramesPerPacket = 1,
    ChannelsPerFrame = 1,
    BitsPerChannel = 16,
    BytesPerPacket = 2,
    BytesPerFrame = 2,
    Reserved = 0
};
recorder = new InputAudioQueue (recordFormat);
for (int count = 0; count < BufferCount; count++) {
    IntPtr bufferPointer;
    recorder.AllocateBuffer (AudioBufferSize, out bufferPointer);
    recorder.EnqueueBuffer (bufferPointer, AudioBufferSize, null);
}
recorder.InputCompleted += HandleInputCompleted;
recorder.Start ();
So in this example, assuming an AudioBufferSize of 8k and a BufferCount of 3, as soon as the first of the three buffers is filled, our handler HandleInputCompleted is called (and since the queue still has two buffers, recording continues into them).

Our InputCompleted handler:
private void HandleInputCompleted (object sender, InputCompletedEventArgs e)
{
    // We received a new buffer of audio, do something with it....
    // Some unsafe code will be required to rip the buffer...
    // Place the buffer back into the queue so iOS knows you are done with it
    recorder.EnqueueBuffer (e.IntPtrBuffer, AudioBufferSize, null);
    // At some point you need to call `recorder.Stop();` ;-)
}
(I stripped the code from our handler because it is a custom audio-to-text learning neural net; we use very small buffers in a very deep queue to reduce feedback lag and to load the audio data to the cloud in single TCP/UDP packets for processing — think Siri ;-)

Within this handler you have access to the pointer to the buffer that was just filled, via InputCompletedEventArgs.IntPtrBuffer. Using that pointer you can peek at each byte in the buffer and poke them into your C#-based stream, if that is your goal.
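A minimal sketch of that buffer-copying step (assumptions: a MemoryStream field named audioStream, plus the recorder and AudioBufferSize fields from above; the AudioQueueBuffer struct is read through an unsafe pointer cast of IntPtrBuffer):

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;
using AudioToolbox;

unsafe void HandleInputCompleted (object sender, InputCompletedEventArgs e)
{
    // Interpret the native pointer as an AudioQueueBuffer to get at the data
    var buffer = (AudioQueueBuffer*) e.IntPtrBuffer;

    // Copy the filled bytes into managed memory...
    var bytes = new byte[buffer->AudioDataByteSize];
    Marshal.Copy (buffer->AudioData, bytes, 0, (int)buffer->AudioDataByteSize);

    // ...and into your C#-based stream (or a network socket, etc.)
    audioStream.Write (bytes, 0, bytes.Length);

    // Hand the buffer back to iOS for reuse
    recorder.EnqueueBuffer (e.IntPtrBuffer, AudioBufferSize, null);
}
```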