

Translate Objective-C Introduction to AudioUnits into Swift

I already managed to translate this code so that the render callback gets called: http://www.cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html

I'm sure my render callback method is not implemented correctly, because I either get no sound at all or pretty awful noise from my headphones. I also don't see a connection between my audioSession in viewDidLoad and the rest of the code.

Is there anyone who can help me out with this?

private func performRender(
    inRefCon: UnsafeMutablePointer<Void>,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBufNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus
{
    // get object
    let vc = unsafeBitCast(inRefCon, ViewController.self)
    print("callback")

    let thetaIncrement = 2.0 * M_PI * vc.kFrequency / vc.kSampleRate
    var theta = vc.theta

    // var sinValues = [Int32]()
    let amplitude: Double = 0.25

    let abl = UnsafeMutableAudioBufferListPointer(ioData)
    for buffer in abl
    {
        let val: Int32 = Int32(sin(theta) * amplitude)
        // sinValues.append(val)
        theta += thetaIncrement

        memset(buffer.mData, val, Int(buffer.mDataByteSize))
    }

    vc.theta = theta

    return noErr
}

class ViewController: UIViewController
{
    let kSampleRate: Float64 = 44100
    let kFrequency: Double = 440
    var theta: Double = 0

    private var toneUnit = AudioUnit()
    private let kInputBus = AudioUnitElement(1)
    private let kOutputBus = AudioUnitElement(0)

    @IBAction func tooglePlay(sender: UIButton)
    {
        if toneUnit != nil
        {
            AudioOutputUnitStop(toneUnit)
            AudioUnitInitialize(toneUnit)
            AudioComponentInstanceDispose(toneUnit)
            toneUnit = nil
        }
        else
        {
            createToneUnit()
            var err = AudioUnitInitialize(toneUnit)
            assert(err == noErr, "error initializing audiounit!")
            err = AudioOutputUnitStart(toneUnit)
            assert(err == noErr, "error starting audiooutput unit!")
        }
    }

    func createToneUnit()
    {
        var defaultOutputDescription = AudioComponentDescription(
            componentType: kAudioUnitType_Output,
            componentSubType: kAudioUnitSubType_RemoteIO,
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0,
            componentFlagsMask: 0)

        let defaultOutput = AudioComponentFindNext(nil, &defaultOutputDescription)

        let fourBytesPerFloat: UInt32 = 4
        let eightBitsPerByte: UInt32 = 8

        var err = AudioComponentInstanceNew(defaultOutput, &toneUnit)
        assert(err == noErr, "error setting audio component instance!")

        var input = AURenderCallbackStruct(
            inputProc: performRender,
            inputProcRefCon: UnsafeMutablePointer(unsafeAddressOf(self)))

        err = AudioUnitSetProperty(toneUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &input, UInt32(sizeof(AURenderCallbackStruct)))
        assert(err == noErr, "error setting render callback!")

        var streamFormat = AudioStreamBasicDescription(
            mSampleRate: kSampleRate,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
            mBytesPerPacket: fourBytesPerFloat,
            mFramesPerPacket: 1,
            mBytesPerFrame: fourBytesPerFloat,
            mChannelsPerFrame: 1,
            mBitsPerChannel: fourBytesPerFloat * eightBitsPerByte,
            mReserved: 0)

        err = AudioUnitSetProperty(toneUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, kInputBus, &streamFormat, UInt32(sizeof(AudioStreamBasicDescription)))
        assert(err == noErr, "error setting audiounit property!")
    }

    override func viewDidLoad()
    {
        super.viewDidLoad()
        let audioSession = AVAudioSession.sharedInstance()

        do
        {
            try audioSession.setCategory(AVAudioSessionCategoryPlayback)
        }
        catch
        {
            print("Audio session setCategory failed")
        }

        do
        {
            try audioSession.setPreferredSampleRate(kSampleRate)
        }
        catch
        {
            print("Audio session samplerate error")
        }

        do
        {
            try audioSession.setPreferredIOBufferDuration(0.005)
        }
        catch
        {
            print("Audio session bufferduration error")
        }

        do
        {
            try audioSession.setActive(true)
        }
        catch
        {
            print("Audio session activate failure")
        }
    }
}

  • vc.theta isn't being incremented
  • memset only takes a byte's worth of val
  • the AudioUnit expects Floats, but you're storing Int32s
  • the range of the audio data looks funny too - why not keep it in the range [-1, 1]?
  • there's no need to constrain theta either; sin can handle that fine.

Are you sure this used to work in Objective-C?
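
Putting those bullet points together, a minimal sketch of a corrected callback might look like the following. It assumes the mono Float32 stream format that createToneUnit requests, writes each Float32 sample individually instead of calling memset, and advances theta once per frame so the phase carries over between callbacks. This is an illustrative sketch only, not a tested drop-in fix:

private func performRender(
    inRefCon: UnsafeMutablePointer<Void>,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBufNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus
{
    let vc = unsafeBitCast(inRefCon, ViewController.self)

    let thetaIncrement = 2.0 * M_PI * vc.kFrequency / vc.kSampleRate
    var theta = vc.theta
    let amplitude: Float32 = 0.25   // keeps samples well inside [-1, 1]

    let abl = UnsafeMutableAudioBufferListPointer(ioData)
    for buffer in abl   // a single buffer, since the stream format is mono
    {
        // reinterpret the raw bytes as Float32 samples to match the stream format
        let samples = UnsafeMutablePointer<Float32>(buffer.mData)
        let frameCount = Int(buffer.mDataByteSize) / sizeof(Float32)

        for frame in 0..<frameCount
        {
            // write each sample individually (memset would only replicate one byte)
            samples[frame] = Float32(sin(theta)) * amplitude
            theta += thetaIncrement   // advance the phase once per frame
        }
    }

    // persist the phase so the next callback continues where this one stopped
    vc.theta = theta

    return noErr
}

With this, the samples stay in [-0.25, 0.25], inside the [-1, 1] range the output unit expects, and because sin is periodic there is no need to wrap theta back into [0, 2π).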
