Web Audio API - Stereo to Mono
I need to convert a stereo input stream (channelCount: 2) coming from chrome.tabCapture.capture to a mono stream and send it to a server, while keeping the original audio unchanged.
I've tried several things, but the destination.stream always has 2 channels.
const context = new AudioContext()
const splitter = context.createChannelSplitter(1)
const merger = context.createChannelMerger(1)
const source = context.createMediaStreamSource(stream)
const dest = context.createMediaStreamDestination()
splitter.connect(merger)
source.connect(splitter)
source.connect(context.destination) // audio unchanged
merger.connect(dest) // mono audio sent to "dest"
console.log(dest.stream.getAudioTracks()[0].getSettings()) // channelCount: 2
I've also tried this:
const context = new AudioContext()
const merger = context.createChannelMerger(1)
const source = context.createMediaStreamSource(stream)
const dest = context.createMediaStreamDestination()
source.connect(context.destination)
source.connect(merger)
merger.connect(dest)
console.log(dest.stream.getAudioTracks()[0].getSettings()) // channelCount: 2
and this:
const context = new AudioContext()
const source = context.createMediaStreamSource(stream)
const dest = context.createMediaStreamDestination({
channelCount: 1,
channelCountMode: 'explicit'
})
source.connect(context.destination)
source.connect(dest)
console.log(dest.stream.getAudioTracks()[0].getSettings()) // channelCount: 2
There has to be an easy way to achieve this... Thanks!
There is a bug in Chrome which requires the audio to flow before the channelCount property gets updated. It's 2 by default.
The following example assumes that the AudioContext is running. Calling resume() in response to a user gesture should work in case it isn't allowed to start on its own.
const audioContext = new AudioContext();
const sourceNode = new MediaStreamAudioSourceNode(
  audioContext,
  { mediaStream }
);
// An explicit channelCount of 1 makes the destination down-mix the input to mono.
const destinationNode = new MediaStreamAudioDestinationNode(
  audioContext,
  { channelCount: 1 }
);
sourceNode.connect(destinationNode);
// Wait for some audio to flow; only then does Chrome report the real channelCount.
setTimeout(() => {
  console.log(destinationNode.stream.getAudioTracks()[0].getSettings()); // channelCount: 1
}, 100);
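For reference, when a stereo signal reaches a node constrained to one channel, the Web Audio spec's speaker down-mix rule averages the two channels (mono = 0.5 * (left + right)). A minimal sketch of that rule in plain JavaScript, with made-up sample values for illustration:

```javascript
// Down-mix a stereo pair of sample buffers to mono by averaging,
// mirroring the Web Audio API's 2 -> 1 speaker down-mix rule.
function downmixToMono(left, right) {
  const mono = new Float32Array(left.length);
  for (let i = 0; i < left.length; i++) {
    mono[i] = 0.5 * (left[i] + right[i]);
  }
  return mono;
}

const left = Float32Array.from([0.2, 0.4, -0.6]);
const right = Float32Array.from([0.6, 0.0, -0.2]);
console.log(downmixToMono(left, right)); // ≈ [0.4, 0.2, -0.4]
```

This is only what the destination node does internally once channelCount: 1 takes effect; you don't need to implement it yourself.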