
iOS Core Audio render callback works on simulator, not on device

My callback looks like this:

static OSStatus renderInput(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData)
{
    AudioSampleType *outBuffer = (AudioSampleType *)ioData->mBuffers[0].mData;
    memset(outBuffer, 0, sizeof(AudioSampleType)*inNumberFrames*kNumChannels);      

    //copy a sine wave into outBuffer
    double max_aust = pow(2.f, (float)(sizeof(AudioSampleType)*8.0 - 1.f)) - 1.0;
    for(int i = 0; i < inNumberFrames; i++) {
        SInt16 val = (SInt16) (gSine2_[(int)phase2_] * max_aust);
        outBuffer[2*i] = outBuffer[2*i+1] = (AudioSampleType)val;
        phase2_ += inc2_;
        if(phase2_ > 1024) phase2_ -= 1024;
    }

    return noErr;
}

This is a super basic render callback that should just play a sine wave. It does on the simulator, but it does NOT on the device. In fact, I can get no audio from the device at all. If I add a printf to check outBuffer, it shows that outBuffer is indeed filled with samples of a sine wave.

I'm setting the session category to Ambient, but I've tried PlayAndRecord and MediaPlayback as well. No luck with any of them. My preferred framesPerBuffer is 1024 (which is what I get on both the simulator and the device). My sample rate is 44100 Hz. I've tried 48000 as well just in case, and I've also tried changing framesPerBuffer.

Are there any other reasons that the samples would not reach the hardware on the device?

UPDATE: I just found out that if I plug headphones into the device, I hear what sounds like a sine wave that is clipping horribly. This made me think the device might be expecting floating point instead of signed int, but when I changed the values to the range -1 to 1 there was no audio at all (on device or simulator, as expected, since the engine is set to accept signed int, not floating point).

I can't tell for sure without seeing more of your setup, but it sounds very much like you're getting bitten by the difference between AudioSampleType (SInt16 samples) and AudioUnitSampleType (fixed 8.24 samples inside of a SInt32 container). It's almost certainly the case that AudioUnitSampleType is the format expected in your callback. This post on the Core Audio mailing list does a very good job explaining the difference between the two, and why they exist.

Since I don't know the details of your setup, I suggest reading this: http://www.cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html

The sample code is for a mono tone generator, if you want stereo fill the second channel too.

The pointer to the second channel's buffer is:

const int secondChannel = 1;
Float32 *bufferSecondChannel = (Float32 *)ioData->mBuffers[secondChannel].mData;

Hope this helps.

You may need to set up the audio session (initialize it, set its category, and activate it):

AudioSessionInitialize(NULL, NULL, NULL, NULL);
UInt32 category = kAudioSessionCategory_MediaPlayback;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
OSStatus result = AudioSessionSetActive(true);

More at: http://developer.apple.com/library/ios/#documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Cookbook/Cookbook.html
