
iOS Audio Mixing

When mixing two audio files, this piece of code treats one sound as stereo and the other as mono. Why is that? Why can't both of them be treated as stereo?

 @property (readwrite)           AudioStreamBasicDescription stereoStreamFormat;
 @property (readwrite)           AudioStreamBasicDescription monoStreamFormat;
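
For context, the snippet below is a minimal sketch (not the original author's code) of how two such formats might be filled in for the 44.1 kHz, 16-bit interleaved PCM that the check further down expects; makePCMFormat is a hypothetical helper introduced only for illustration.

 #import <AudioToolbox/AudioToolbox.h>

 // Hypothetical helper: build an AudioStreamBasicDescription for packed,
 // interleaved, native-endian, 16-bit signed integer linear PCM at 44.1 kHz.
 static AudioStreamBasicDescription makePCMFormat(UInt32 channels) {
     AudioStreamBasicDescription asbd = {0};
     UInt32 bytesPerSample  = sizeof(SInt16);             // 16-bit samples
     asbd.mFormatID         = kAudioFormatLinearPCM;
     asbd.mFormatFlags      = kAudioFormatFlagsNativeEndian |
                              kAudioFormatFlagIsPacked |
                              kAudioFormatFlagIsSignedInteger;
     asbd.mSampleRate       = 44100.0;
     asbd.mChannelsPerFrame = channels;                   // 2 = stereo, 1 = mono
     asbd.mBitsPerChannel   = 8 * bytesPerSample;
     asbd.mBytesPerFrame    = channels * bytesPerSample;  // packed, interleaved
     asbd.mFramesPerPacket  = 1;                          // always 1 for linear PCM
     asbd.mBytesPerPacket   = asbd.mBytesPerFrame;
     return asbd;
 }

With such a helper, stereoStreamFormat could hold makePCMFormat(2) and monoStreamFormat could hold makePCMFormat(1); the only difference between the two formats is the channel count and the per-frame byte sizes derived from it.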

The check of the audio file's format looks like this:

 if ((inputDataFormat.mFormatID == kAudioFormatLinearPCM) &&
     (inputDataFormat.mSampleRate == 44100.0) &&
     (inputDataFormat.mChannelsPerFrame == 2) &&
     (inputDataFormat.mBitsPerChannel == 16) &&
     (inputDataFormat.mFormatFlags == (kAudioFormatFlagsNativeEndian |
                                       kAudioFormatFlagIsPacked |
                                       kAudioFormatFlagIsSignedInteger))) {
     // no-op when the expected data format is found
 } else {
     status = kAudioFileUnsupportedFileTypeError;
     goto reterr;
 }

Why is there a no-op branch when that data format is encountered?

When mixing two audio files, this piece of code treats one sound as stereo and the other as mono. Why is that?

That's an implementation detail of whatever library it is you're using.

Why can't both of them be treated as stereo?

Well, they certainly could be, if you wrote the supporting code to duplicate the signals'/streams' data. Normally you would not do this; instead you preserve the distinction (as the original author has done), so that your render chain does not double the rendering work or the file sizes.
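
As a minimal sketch of what that supporting code could look like, assuming 16-bit interleaved PCM, here is a hypothetical helper (not part of the original code) that writes each mono sample into both output channels:

 #import <AudioToolbox/AudioToolbox.h>

 // Hypothetical upmix: copy every mono sample into the left and right slots
 // of an interleaved stereo buffer (16-bit signed PCM assumed).
 static void upmixMonoToStereo(const SInt16 *mono, SInt16 *stereo, UInt32 frameCount) {
     for (UInt32 frame = 0; frame < frameCount; ++frame) {
         stereo[2 * frame]     = mono[frame];   // left channel
         stereo[2 * frame + 1] = mono[frame];   // right channel
     }
 }

Note that the stereo buffer holds twice as many samples as the mono one, which is exactly the extra memory and processing the original approach avoids.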

Why does the no-op branch exist when that data format is encountered?

That's just the author's coding style. The same check could be written several ways.
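
For example, the check could be inverted so that only the unsupported case needs a branch; this is just a restyling of the snippet from the question (it reuses its inputDataFormat, status, and reterr), not new functionality:

 // Equivalent formulation: fail fast when the format is anything other than
 // 44.1 kHz, 16-bit, packed, signed-integer, native-endian stereo linear PCM.
 if ((inputDataFormat.mFormatID != kAudioFormatLinearPCM) ||
     (inputDataFormat.mSampleRate != 44100.0) ||
     (inputDataFormat.mChannelsPerFrame != 2) ||
     (inputDataFormat.mBitsPerChannel != 16) ||
     (inputDataFormat.mFormatFlags != (kAudioFormatFlagsNativeEndian |
                                       kAudioFormatFlagIsPacked |
                                       kAudioFormatFlagIsSignedInteger))) {
     status = kAudioFileUnsupportedFileTypeError;
     goto reterr;
 }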
