
How to play an MP3 file via JavaScript AudioWorklet?


I've followed this example and created a custom AudioWorkletProcessor which works as expected. What I'd like to do now is to stream MP3 audio from my server (I'm currently using Python/Flask) into it.

So, for example:

const response = await fetch(url);
const reader = response.body.getReader();

while (true) {
  const {value, done} = await reader.read();
  if (done) break;
  // do something with value
}

which gives me a Uint8Array. How do I pass its content to the AudioWorklet instead of the current channel[i] = Math.random() * 2 - 1;?

Thank you :)

Firstly, MP3 is a compressed audio file format, but the Web Audio API nodes only work with uncompressed sample data. You'll need to use the decodeAudioData() method of the AudioContext object to convert the bytes of the MP3 file into an AudioBuffer object.

Secondly, decodeAudioData() isn't really designed for streaming, but because you're using MP3 you're in luck. See "Encoding fails when I fetch audio content partially" for more information.

Thirdly, the AudioContext object isn't accessible from inside an AudioWorkletProcessor, so you'll have to call decodeAudioData() from the main thread and then pass the decompressed data from your AudioWorkletNode to your AudioWorkletProcessor using their respective message ports, which are accessible from the port property of each object.

Fourthly, AudioBuffer isn't one of the allowed types that can be sent through a message port using postMessage(). Fortunately, the Float32Array returned by the buffer's getChannelData() method is one of the supported types.
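Putting those points together, the main-thread side could look something like the sketch below. The module path 'my-processor.js', the processor name 'my-processor', and sending only one channel are assumptions made for brevity:

```javascript
// Main thread (sketch): fetch the whole MP3, decode it, and post the
// decompressed samples to the worklet via its message port.
const ctx = new AudioContext();
await ctx.audioWorklet.addModule('my-processor.js'); // assumed module path

const node = new AudioWorkletNode(ctx, 'my-processor'); // assumed processor name
node.connect(ctx.destination);

const response = await fetch(url);
const encoded = await response.arrayBuffer();
const audioBuffer = await ctx.decodeAudioData(encoded);

// getChannelData() returns a Float32Array, which postMessage() can send.
// Listing samples.buffer as a transferable moves it instead of copying it.
const samples = audioBuffer.getChannelData(0); // channel 0 only, for brevity
node.port.postMessage(samples, [samples.buffer]);
```

Note that decodeAudioData() resamples the decoded audio to the context's sample rate, so the worklet can consume the samples directly.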

I'm not sure what your reason is for using an audio worklet. It depends on what you want to do with the MP3, but if all you want to do is play it then there are simpler solutions with lower CPU usage.

for (let i = 0; i < value.length; i++) {
    channel[i] = value[i];
}

My approach to streaming audio would be to start with an <audio> tag or Audio object (same thing). That way the browser handles all the streaming and decoding issues for me without further intervention. Then, if I wanted to bring the audio into the Web Audio API to do some client-side real-time post-processing, I'd use a MediaElementAudioSourceNode to achieve that, followed by the built-in audio node types: BiquadFilterNode for EQ, DynamicsCompressorNode for dynamic range compression, and ConvolverNode for reverb. Only if I needed something that couldn't be constructed from the built-in node types, no matter what combination they were assembled in, would I start writing an audio worklet. And unfortunately there are a few common things that the built-in Web Audio nodes cannot do. A single-pole filter with a variable cutoff is one example (though if a fixed cutoff is acceptable then an IIRFilterNode can be used). Sample-and-hold is another. Being able to work around these kinds of limitations is why audio worklets rock and are super useful as small components within a larger system of nodes.
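That simpler approach might be sketched as follows. The URL and the filter settings are placeholders:

```javascript
// Let the browser handle streaming and MP3 decoding via a media element.
const audio = new Audio('/stream.mp3'); // placeholder URL
const ctx = new AudioContext();

// Bring the element's output into the Web Audio graph...
const source = ctx.createMediaElementSource(audio);

// ...then post-process with built-in nodes: EQ -> compression -> output.
const eq = new BiquadFilterNode(ctx, { type: 'peaking', frequency: 1000, gain: 3 });
const comp = new DynamicsCompressorNode(ctx);
source.connect(eq).connect(comp).connect(ctx.destination);

audio.play();
```

Because connect() returns the node it was given, the graph can be wired up as a single chain like this.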

A second thing I wanted to clarify with respect to my previous answer is that although my suggestion isn't wrong per se (that is how you'd get any kind of data into an AudioWorkletProcessor that isn't an input or an AudioParam, for example to emulate something similar to the buffer property on the ConvolverNode), your approach using fetch() and the phrase "pass its content to the AudioWorklet" (i.e. the Uint8Array) led me down a dubious line of thinking. What you have is a stream of data. The fact that the server sends it as MP3 is irrelevant. The Web Audio API works using a streams paradigm too. The more common ways to get data into an audio worklet are either:

  1. To give the worklet an input. Then you can pass an instance of the worklet as a parameter to another node's connect() method, or,

  2. By giving the worklet an AudioParam.

Which one is appropriate depends on what you will be doing with the content of the MP3, but if you don't already have another input defined then it's probably an input. To get the stream of MP3 data into the format the Web Audio API uses for its streams, you need a MediaElementAudioSourceNode (or an AudioBufferSourceNode for a non-streaming MP3 file). Then connect the MediaElementAudioSourceNode into your worklet using connect(). Inputs are all interchangeable, so if implemented this way your worklet would be able to process any kind of audio data connected into it and wouldn't be limited to processing MP3. When you're writing worklet code you usually don't care about where the audio is coming from. All inputs and AudioParams are just streams of samples (without compression).
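Concretely, the wiring for option 1 might look like this (the URL, module path, and processor name are assumptions); the worklet then sees ordinary uncompressed sample blocks in inputs[0], regardless of the source format:

```javascript
// Main thread (sketch): stream the MP3 via a media element and route it
// through a worklet that has an input.
const audio = new Audio('/stream.mp3');              // placeholder URL
const ctx = new AudioContext();
await ctx.audioWorklet.addModule('my-processor.js'); // assumed module path

const source = ctx.createMediaElementSource(audio);
const worklet = new AudioWorkletNode(ctx, 'my-processor'); // assumed name

// source -> worklet -> speakers; the worklet receives decoded samples.
source.connect(worklet).connect(ctx.destination);
audio.play();
```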

This Chrome Developers blog post covers how to make worklets that have their own inputs and AudioParams.
