
Change the sound in WebAudioAPI with no user interaction on iOS

I'm using this function to create a sound. It works well on desktop and Android, and works initially on iOS when I use a touch event to start it. I need to later replace the sound with another sound file, but on iOS the new sound doesn't start - I assume because it needs another user interaction before it will play.

This is a VR app running in a headset, so this kind of user interaction isn't possible. Is there another way of replacing the sound, or another non-click user interaction I can use, such as movement?

I've seen this: http://matt-harrison.com/perfect-web-audio-on-ios-devices-with-the-web-audio-api/

It seems to offer another solution, but I don't want to pre-load all of the files (they're reasonably big and there are 10 of them), which seems to be a requirement there - plus I use the pause function in the code I have. Are there any easy ways around this?

var AudioContext = window.AudioContext || window.webkitAudioContext, context = new AudioContext();

function createSound(filename) {
console.log('createSound()');

var url = cdnPrefix + '/' + filename;
var buffer;


context = new AudioContext();

var request = new XMLHttpRequest();
request.open('GET', url, true);
request.responseType = 'arraybuffer';

// Decode asynchronously
request.onload = function() {
    context.decodeAudioData(request.response, function(b) {
        buffer = b;
        play();
    });
}
request.send();


var sourceNode = null,
    startedAt = 0,
    pausedAt = 0,
    playing = false,
    volume = context.createGain();

var play = function() {

    if(playing || !buffer)
        return;

    var offset = pausedAt;

    sourceNode = context.createBufferSource();
    // Route through the gain node so volume changes take effect:
    // source -> volume -> destination
    sourceNode.connect(volume);
    volume.connect(context.destination);
    volume.gain.value = 1;

    sourceNode.buffer = buffer;
    sourceNode.start(0, offset);
    sourceNode.onended = onEnded;

    // Note: AudioBufferSourceNode has no 'statechange' or 'loaded' events;
    // 'ended' (set above) is the only event it fires.
    //sourceNode.loop = true;
    startedAt = context.currentTime - offset;
    pausedAt = 0;
    playing = true;
    $(document).trigger("voiceoverPlay");

    if(isPaused == true)
        pause();
};

function onEnded(event){
    $(document).trigger("voiceoverEnded");
    play();
}

function onStateChange(event){
    console.log('onStateChange',event);
}

function onLoaded(event){
    console.log('onLoaded',event);
}


var pause = function() {
    var elapsed = context.currentTime - startedAt;
    stop();
    pausedAt = elapsed;
    $(document).trigger("voiceoverPause");
};

var stop = function() {
    if (sourceNode) {
        sourceNode.disconnect();
        if(playing === true)
            sourceNode.stop(0);
        sourceNode = null;
    }
    pausedAt = 0;
    startedAt = 0;
    playing = false;
};

var getPlaying = function() {
    return playing;
};

var getCurrentTime = function() {
    if(pausedAt) {
        return pausedAt;
    }
    if(startedAt) {
        return context.currentTime - startedAt;
    }
    return 0;
};

var setCurrentTime = function(time) {
    pausedAt = time;
};

var getDuration = function() {
    return buffer.duration;
};

return {
    getCurrentTime: getCurrentTime,
    setCurrentTime: setCurrentTime,
    getDuration: getDuration,
    getPlaying: getPlaying,
    play: play,
    pause: pause,
    stop: stop
};

}

You need a touch event for each sound.
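That said, a commonly cited workaround (and essentially what the article linked in the question boils down to) is to create a single AudioContext for the whole app and "unlock" it once inside the first touch handler; sounds started later on that same context generally don't need further interaction. The code in the question recreates the context inside createSound(), which forfeits the unlock. A minimal sketch (names like makeUnlocker are illustrative, not from the question):

```javascript
// Create ONE AudioContext for the app and unlock it from the first gesture.
// Once unlocked, later start() calls on this context need no interaction,
// so sounds can be swapped freely.
function makeUnlocker(context) {
  var unlocked = false;
  return function unlock() {
    if (unlocked) return;
    // Newer WebKit: resuming the context from a user gesture is enough.
    if (typeof context.resume === "function") context.resume();
    // Older iOS WebKit: also play a 1-sample silent buffer from the gesture.
    var silent = context.createBuffer(1, 1, 22050);
    var source = context.createBufferSource();
    source.buffer = silent;
    source.connect(context.destination);
    source.start(0);
    unlocked = true;
  };
}

// Usage in the browser:
// document.body.addEventListener("touchend", makeUnlocker(context), false);
```

The key difference from the question's code is that createSound() would then reuse the already-unlocked context instead of calling new AudioContext() per sound.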

I ended up using SoundJS, which is much better.
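For reference, a sketch of what the SoundJS version looks like (the file names and IDs here are hypothetical; cdnPrefix is carried over from the question). SoundJS handles the iOS touch-unlock internally, so after the first gesture any registered sound can be played or swapped without another one:

```javascript
// Register sounds with SoundJS (createjs.Sound); "fileload" fires per file
// once it is ready to play. IDs let you swap sounds with a single play() call.
function setupVoiceovers(Sound, cdnPrefix, filenames) {
  filenames.forEach(function (name, i) {
    Sound.registerSound(cdnPrefix + "/" + name, "voiceover" + i);
  });
}

function playVoiceover(Sound, id) {
  // Replacing the current sound later is just another play() call;
  // the returned sound instance also supports pause via .paused = true.
  return Sound.play(id);
}

// Usage in the browser, after the CreateJS script is loaded:
// setupVoiceovers(createjs.Sound, cdnPrefix, ["intro.mp3", "scene2.mp3"]);
// playVoiceover(createjs.Sound, "voiceover0");
```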
