
How do I play audio returned from an XMLHttpRequest using the HTML5 Audio API

I'm unable to play audio when making an "AJAX" request to my server-side API.

I have backend Node.js code that uses IBM's Watson Text-to-Speech service to serve audio synthesized from text:

var render = function(request, response) {
    var options = {
        text: request.params.text,
        voice: 'VoiceEnUsMichael',
        accept: 'audio/ogg; codecs=opus'
    };

    synthesizeAndRender(options, request, response);
};

var synthesizeAndRender = function(options, request, response) {
    // synthesize() returns a readable stream of the generated audio
    var synthesizedSpeech = textToSpeech.synthesize(options);

    synthesizedSpeech.on('response', function(eventResponse) {
        // If a download was requested, have the browser save the audio as a file
        if(request.params.text.download) {
            var contentDisposition = 'attachment; filename=transcript.ogg';

            eventResponse.headers['content-disposition'] = contentDisposition;
        }
    });

    // Stream the audio straight through to the HTTP response
    synthesizedSpeech.pipe(response);
};

I have client-side code to handle that:

var xhr = new XMLHttpRequest(),
    audioContext = new AudioContext(),
    source = audioContext.createBufferSource();

module.controllers.TextToSpeechController = {
    fetch: function() {
        xhr.onload = function() {
            var playAudio = function(buffer) {
                source.buffer = buffer;
                source.connect(audioContext.destination);

                source.start(0);
            };

            // TODO: Handle properly (exiquio)
            // NOTE: error is being received
            var handleError = function(error) {
                console.log('An audio decoding error occurred');
            };

            audioContext
                .decodeAudioData(xhr.response, playAudio, handleError);
        };
        xhr.onerror = function() { console.log('An error occurred'); };

        var urlBase = 'http://localhost:3001/api/v1/text_to_speech/';
        var url = [
            urlBase,
            'test',
        ].join('');

        xhr.open('GET', encodeURI(url), true);
        xhr.setRequestHeader('x-access-token', Application.token);
        xhr.responseType = 'arraybuffer';
        xhr.send();
    }
};

The backend returns the audio that I expect, but my success callback, playAudio, is never called. Instead, handleError is always called and the error object is always null.

Could anyone explain what I'm doing wrong and how to correct this? It would be greatly appreciated.

Thanks.

NOTE: The string "test" in the URL becomes a text param on the backend and ends up in the options variable in synthesizeAndRender.

Unfortunately, unlike Chrome's HTML5 Audio implementation, Chrome's Web Audio implementation doesn't support audio/ogg;codecs=opus, which is what your request uses here. You need to set the format to audio/wav for this to work. To be sure it's passed through to the server request, I suggest putting it in the query string (accept=audio/wav, URL-encoded).
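For example, a minimal sketch of how that could look (the accept query parameter and the request.query forwarding are assumptions about your route, not code from the question):

// Client side: request WAV explicitly, URL-encoded in the query string
var urlBase = 'http://localhost:3001/api/v1/text_to_speech/';
var url = urlBase + 'test?accept=' + encodeURIComponent('audio/wav');

// Server side (in render): forward the requested format to Watson
var options = {
    text: request.params.text,
    voice: 'VoiceEnUsMichael',
    accept: request.query.accept || 'audio/wav'
};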

Are you just looking to play the audio, or do you need access to the Web Audio API for audio transformation? If you just need to play it, I can show you how to do that easily with the HTML5 Audio API (not the Web Audio one). With HTML5 Audio you can stream it using the technique below, and you can use the optimal audio/ogg;codecs=opus format.

It's as simple as dynamically setting the source of your audio element, queried from the DOM via something like this:

(in HTML)

<audio id="myAudioElement"></audio>

(in your JS)

var audio = document.getElementById('myAudioElement') || new Audio();
audio.src = yourUrl;

You can also set the audio element's source via an XMLHttpRequest, but then you won't get streaming. However, since you can use a POST method, you're not limited to the text length of a GET request (for this API, ~6KB). To set it in XHR, you create an object URL from a blob response:

    xhr.open('POST', encodeURI(url), true);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.responseType = 'blob';
    xhr.onload = function(evt) {
      // Re-wrap the response so the blob carries the right MIME type
      var blob = new Blob([xhr.response], {type: 'audio/ogg'});
      var objectUrl = URL.createObjectURL(blob);
      audio.src = objectUrl;
      // Release the object URL once playback has finished
      // (<audio> elements never fire a 'load' event)
      audio.onended = function(evt) {
        URL.revokeObjectURL(objectUrl);
      };
      audio.play();
    };
    var data = JSON.stringify({text: yourTextToSynthesize});
    xhr.send(data);

As you can see, with XMLHttpRequest you have to wait until the data has fully loaded before playing. There may be a way to stream from XMLHttpRequest using the very new Media Source Extensions API, which is currently available only in Chrome and IE (not Firefox or Safari). This is an approach I'm currently experimenting with; I'll update here if I'm successful.
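In the meantime, here is a rough sketch of the general MSE pattern, with the caveats that it is untested here and makes several assumptions: the MIME type is checked with MediaSource.isTypeSupported() because Chrome's MSE support differs from its <audio> support (audio/webm;codecs=opus is more likely to be accepted than audio/ogg), and fetch() is used to read the body incrementally, since a plain XMLHttpRequest only exposes the complete response:

var audio = document.getElementById('myAudioElement');
var mimeType = 'audio/webm; codecs="opus"';

if (window.MediaSource && MediaSource.isTypeSupported(mimeType)) {
    var mediaSource = new MediaSource();
    audio.src = URL.createObjectURL(mediaSource);
    audio.play();

    mediaSource.addEventListener('sourceopen', function() {
        var sourceBuffer = mediaSource.addSourceBuffer(mimeType);

        fetch(url, {headers: {'x-access-token': Application.token}})
            .then(function(response) {
                var reader = response.body.getReader();

                // Append each network chunk, waiting for the previous append
                // to finish before reading the next one
                var pump = function() {
                    return reader.read().then(function(result) {
                        if (result.done) {
                            mediaSource.endOfStream();
                            return;
                        }
                        sourceBuffer.appendBuffer(result.value);
                        sourceBuffer.addEventListener('updateend', pump, {once: true});
                    });
                };
                return pump();
            });
    });
}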
