
Connect an audio player UI to an AudioContext.destination

There are various ReactJS components that provide a UI to control audio playback. Most of these assume you will provide audio file paths. How can I instead tell the UI to control the audio playback of an AudioContext.destination node?

The AudioContext will have various sources and intermediate nodes. I want the UI component to show the user information (current time position, volume status) and give them control (play/pause, volume, mute), mapped correspondingly onto the AudioContext.

Unfortunately there is no simple way to map the transport functions of an AudioElement to an AudioContext, but there are of course some similarities.

I don't use any React in the following examples, but it should hopefully be fairly simple to wrap the code snippets in a Component that can be consumed by your frontend framework of choice.

Let's say you have an instance of an AudioContext.

const audioContext = new AudioContext();

In this case the audioContext is only used to play a simple continuous sine wave by using an OscillatorNode.

const oscillatorNode = new OscillatorNode(audioContext);

oscillatorNode.start();
oscillatorNode.connect(audioContext.destination);

The oscillatorNode could of course be stopped by calling oscillatorNode.stop(), but this would render the oscillatorNode useless. It can't be started again. You would also have to do this for every OscillatorNode in case there is more than one.

But there is a way to pause the whole AudioContext by suspending it.

audioContext.suspend();

This will return a promise that resolves when the AudioContext is paused. To get the AudioContext running again you can use its resume() method.

audioContext.resume();

Just like the suspend() method, resume() returns a promise which resolves when the context is running again.

In addition to that, an AudioContext also has a state property which can be used to find out whether the audioContext is 'running', 'suspended' or 'closed'.
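The state check and the suspend()/resume() calls can be combined into a small play/pause helper. This is just a sketch of my own, not part of the Web Audio API; togglePlayback is a hypothetical name:

```javascript
// Suspend a running context, resume a suspended one.
// Returns the promise from suspend()/resume() so the caller can
// update the UI once the state change has actually happened.
function togglePlayback (context) {
    if (context.state === 'running') {
        return context.suspend();
    }

    if (context.state === 'suspended') {
        return context.resume();
    }

    // A 'closed' context can't be restarted.
    return Promise.resolve();
}
```

A play/pause button could then simply call togglePlayback(audioContext) in its click handler.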

Controlling the volume of the whole audioContext is a bit more tricky. Every AudioContext has a destination, which is the AudioNode that everything has to be connected to. But the destination does not allow you to modify the volume. I think the easiest way to get this functionality is to use an additional GainNode as a proxy.

const destinationGainNode = new GainNode(audioContext);

destinationGainNode.connect(audioContext.destination);

Then you have to make sure that you connect everything to the destinationGainNode instead. In case of the oscillatorNode introduced above, that would look like this:

oscillatorNode.connect(destinationGainNode);

With that proxy in place you can control the volume by using the gain AudioParam of the destinationGainNode. To mute the signal, call ...

destinationGainNode.gain.value = 0;

... and to unmute it again, just call ...

destinationGainNode.gain.value = 1;
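Setting the gain back to 1 on unmute throws away whatever volume the user had chosen before. A small helper can remember the last non-zero value instead; createMuteToggle is a hypothetical name and just a sketch:

```javascript
// Returns a function that mutes the given GainNode and, when
// called again, restores the gain value it had before muting.
function createMuteToggle (gainNode) {
    let previousGain = gainNode.gain.value || 1;

    return () => {
        if (gainNode.gain.value === 0) {
            // Unmute by restoring the remembered volume.
            gainNode.gain.value = previousGain;
        } else {
            // Remember the current volume, then mute.
            previousGain = gainNode.gain.value;
            gainNode.gain.value = 0;
        }
    };
}
```

A mute button in the UI could then call the returned function on every click, e.g. const toggleMute = createMuteToggle(destinationGainNode).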

I hope this helps to create a React Component to control an AudioContext.

Please note that all examples use the latest syntax of the Web Audio API, which is not yet available in Edge and Safari. To get the examples working in these browsers a polyfill is needed. I do of course recommend standardized-audio-context, as I am the author of that package. :-)
