How to access samples in an audio file
I'm building an iPhone app that lets users design an audio filter and test it on some recorded sounds. Here is what I tried to do:
The problem is this: as long as I simply copy the data from one buffer to the other and write it to the second audio file, everything works fine. But as soon as I try to perform any kind of operation on the samples (such as dividing them by 2), the result is random noise. This makes me suspect that I'm not interpreting the sample values correctly, but I've been at it for five days now and I still don't get it. If you know how to access and manipulate individual audio samples, please help me out, I would really appreciate it! Thanks!
This is the code that will later perform the filtering (for now it is just supposed to divide every audio sample by 2):
OSStatus status = noErr;
UInt32 propertySizeDataPacketCount;
UInt32 writabilityDataPacketCount;
UInt32 numberOfPackets;
UInt32 propertySizeMaxPacketSize;
UInt32 writabilityMaxPacketSize;
UInt32 maxPacketSize;
UInt32 numberOfBytesRead;
UInt32 numberOfBytesToWrite;
SInt64 currentPacket;
double x0;
double x1;

// Open the recorded file for reading and the filtered file for writing
status = AudioFileOpenURL(audioFiles->recordedFile,
                          kAudioFileReadPermission,
                          kAudioFileAIFFType,
                          &audioFiles->inputFile);
status = AudioFileOpenURL(audioFiles->filteredFile,
                          kAudioFileReadWritePermission,
                          kAudioFileAIFFType,
                          &audioFiles->outputFile);

// Query the packet count and maximum packet size of the input file
status = AudioFileGetPropertyInfo(audioFiles->inputFile,
                                  kAudioFilePropertyAudioDataPacketCount,
                                  &propertySizeDataPacketCount,
                                  &writabilityDataPacketCount);
status = AudioFileGetProperty(audioFiles->inputFile,
                              kAudioFilePropertyAudioDataPacketCount,
                              &propertySizeDataPacketCount,
                              &numberOfPackets);
status = AudioFileGetPropertyInfo(audioFiles->inputFile,
                                  kAudioFilePropertyMaximumPacketSize,
                                  &propertySizeMaxPacketSize,
                                  &writabilityMaxPacketSize);
status = AudioFileGetProperty(audioFiles->inputFile,
                              kAudioFilePropertyMaximumPacketSize,
                              &propertySizeMaxPacketSize,
                              &maxPacketSize);

SInt16 *inputBuffer = (SInt16 *)malloc(numberOfPackets * maxPacketSize);
SInt16 *outputBuffer = (SInt16 *)malloc(numberOfPackets * maxPacketSize);

currentPacket = 0;
status = AudioFileReadPackets(audioFiles->inputFile,
                              false,
                              &numberOfBytesRead,
                              NULL,
                              currentPacket,
                              &numberOfPackets,
                              inputBuffer);

for (int i = 0; i < numberOfPackets; i++) {
    x0 = (double)inputBuffer[i];
    x1 = 0.5 * x0; // This is supposed to reduce the value of the sample by half
    //x1 = x0;     // This just copies the value of the sample and works fine
    outputBuffer[i] = (SInt16)x1;
}

numberOfBytesToWrite = numberOfBytesRead;
currentPacket = 0;
status = AudioFileWritePackets(audioFiles->outputFile,
                               false,
                               numberOfBytesToWrite,
                               NULL,
                               currentPacket,
                               &numberOfPackets,
                               outputBuffer);

status = AudioFileClose(audioFiles->inputFile);
status = AudioFileClose(audioFiles->outputFile);
To create the audio files I use the following code:
#import "AudioFiles.h"

#define SAMPLE_RATE        44100
#define FRAMES_PER_PACKET  1
#define CHANNELS_PER_FRAME 1
#define BYTES_PER_FRAME    2
#define BYTES_PER_PACKET   2
#define BITS_PER_CHANNEL   16

@implementation AudioFiles

- (void)setupAudioFormat:(AudioStreamBasicDescription *)format {
    format->mSampleRate       = SAMPLE_RATE;
    format->mFormatID         = kAudioFormatLinearPCM;
    format->mFramesPerPacket  = FRAMES_PER_PACKET;
    format->mChannelsPerFrame = CHANNELS_PER_FRAME;
    format->mBytesPerFrame    = BYTES_PER_FRAME;
    format->mBytesPerPacket   = BYTES_PER_PACKET;
    format->mBitsPerChannel   = BITS_PER_CHANNEL;
    format->mReserved         = 0;
    format->mFormatFlags      = kLinearPCMFormatFlagIsBigEndian |
                                kLinearPCMFormatFlagIsSignedInteger |
                                kLinearPCMFormatFlagIsPacked;
}

- (id)init
{
    self = [super init];
    if (self) {
        char path[256];
        NSArray *dirPaths;
        NSString *docsDir;

        dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        docsDir = [dirPaths objectAtIndex:0];
        NSString *recordedFilePath = [docsDir stringByAppendingPathComponent:@"/recordedAudio.aiff"];
        [recordedFilePath getCString:path maxLength:sizeof(path) encoding:NSUTF8StringEncoding];
        recordedFile = CFURLCreateFromFileSystemRepresentation(NULL, (UInt8 *)path, strlen(path), false);
        recordedFileURL = [NSURL fileURLWithPath:recordedFilePath];

        NSString *filteredFilePath = [docsDir stringByAppendingPathComponent:@"/filteredAudio.aiff"];
        [filteredFilePath getCString:path maxLength:sizeof(path) encoding:NSUTF8StringEncoding];
        filteredFile = CFURLCreateFromFileSystemRepresentation(NULL, (UInt8 *)path, strlen(path), false);
        filteredFileURL = [NSURL fileURLWithPath:filteredFilePath];

        AudioStreamBasicDescription audioFileFormat;
        [self setupAudioFormat:&audioFileFormat];

        OSStatus status = noErr;
        status = AudioFileCreateWithURL(recordedFile,
                                        kAudioFileAIFFType,
                                        &audioFileFormat,
                                        kAudioFileFlags_EraseFile,
                                        &inputFile);
        status = AudioFileCreateWithURL(filteredFile,
                                        kAudioFileAIFFType,
                                        &audioFileFormat,
                                        kAudioFileFlags_EraseFile,
                                        &outputFile);
    }
    return self;
}

@end
For recording I use an AVAudioRecorder with the following settings:
NSDictionary *recordSettings =
    [[NSDictionary alloc] initWithObjectsAndKeys:
        [NSNumber numberWithFloat:8000.0],              AVSampleRateKey,
        [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
        [NSNumber numberWithInt:1],                     AVNumberOfChannelsKey,
        [NSNumber numberWithInt:AVAudioQualityMax],     AVEncoderAudioQualityKey,
        [NSNumber numberWithInt:16],                    AVEncoderBitRateKey,
        [NSNumber numberWithBool:YES],                  AVLinearPCMIsBigEndianKey,
        [NSNumber numberWithBool:NO],                   AVLinearPCMIsFloatKey,
        [NSNumber numberWithInt:16],                    AVLinearPCMBitDepthKey,
        [NSNumber numberWithBool:YES],                  AVLinearPCMIsNonInterleaved,
        nil];

NSError *error = nil;
audioRecorder = [[AVAudioRecorder alloc] initWithURL:audioFiles->recordedFileURL
                                            settings:recordSettings
                                               error:&error];
if (error)
{
    NSLog(@"error: %@", [error localizedDescription]);
} else {
    [audioRecorder prepareToRecord];
}
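A side observation (mine, not part of the original answer): these recorder settings don't match the AudioStreamBasicDescription used to create the files. AVSampleRateKey is 8000.0 while SAMPLE_RATE is 44100, AVEncoderBitRateKey is irrelevant for linear PCM, and AVLinearPCMIsNonInterleaved is YES while the ASBD describes packed data (moot for mono, but worth aligning). A sketch of settings that would match the 44.1 kHz mono 16-bit big-endian format, as an untested suggestion:

```
// Hypothetical settings matching the ASBD above (44.1 kHz, mono,
// 16-bit signed big-endian packed PCM) -- adjust as needed.
NSDictionary *recordSettings =
    [[NSDictionary alloc] initWithObjectsAndKeys:
        [NSNumber numberWithFloat:44100.0],             AVSampleRateKey,
        [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
        [NSNumber numberWithInt:1],                     AVNumberOfChannelsKey,
        [NSNumber numberWithBool:YES],                  AVLinearPCMIsBigEndianKey,
        [NSNumber numberWithBool:NO],                   AVLinearPCMIsFloatKey,
        [NSNumber numberWithInt:16],                    AVLinearPCMBitDepthKey,
        [NSNumber numberWithBool:NO],                   AVLinearPCMIsNonInterleaved,
        nil];
```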
Your input data is big-endian, but you are treating it as little-endian (the host byte order on the iPhone and the simulator). That is why a straight copy works, but any arithmetic on the misread values produces noise.
One way to solve this:
SInt16 inVal = OSSwapBigToHostInt16(inputBuffer[i]);
SInt16 outVal = inVal / 2;
outputBuffer[i] = OSSwapHostToBigInt16(outVal);