How to visualize recorded audio from Blob with AudioContext?
I have successfully created an audio wave visualizer based on the mdn example here. I now want to add visualization for recorded audio as well. I record the audio using MediaRecorder and save the result as a Blob. However, I cannot find a way to connect my AudioContext to the Blob.

This is the relevant code part so far:
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioContext.createAnalyser();
var dataArray = new Uint8Array(analyser.frequencyBinCount);

var stream;
if (mediaStream instanceof Blob) {
  // Recorded audio - does not work
  stream = URL.createObjectURL(mediaStream);
} else {
  // Stream from the microphone - works
  stream = mediaStream;
}
var source = audioContext.createMediaStreamSource(stream);
source.connect(analyser);
mediaStream comes from either:
navigator.mediaDevices.getUserMedia({
  audio: this.audioConstraints,
  video: this.videoConstraints,
})
.then(stream => {
  mediaStream = stream;
});
or as a result of the recorded data:
mediaRecorder.addEventListener('dataavailable', event => {
mediaChunks.push(event.data);
});
...
mediaStream = new Blob(mediaChunks, { 'type' : 'video/webm' });
How do I connect the AudioContext to the recorded audio? Is it possible with a Blob? Do I need something else? What am I missing?

I've created a fiddle. The relevant part starts at line 118.

Thanks for help and suggestions.
EDIT: Thanks to Johannes Klauß, I've found a solution. See the updated fiddle.

You can use the Response API to create an ArrayBuffer and decode that with the audio context to create an AudioBuffer, which you can connect to the analyser:
mediaRecorder.addEventListener('dataavailable', event => {
mediaChunks.push(event.data);
});
...
const arrayBuffer = await new Response(new Blob(mediaChunks, { 'type' : 'video/webm' })).arrayBuffer();
const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
const source = audioContext.createBufferSource();
source.buffer = audioBuffer;
source.connect(analyser);
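Putting it together, here is a minimal sketch of the whole playback path. It assumes `audioContext`, `analyser`, and `mediaChunks` are set up as in the snippets above; the helper name `blobToArrayBuffer` and the function `visualizeRecording` are illustrative, not part of the original code. Note that, unlike a media stream source, a buffer source produces no samples until it is started:

```javascript
// Convert a Blob to an ArrayBuffer via the Response API
// (modern browsers also offer blob.arrayBuffer() directly).
function blobToArrayBuffer(blob) {
  return new Response(blob).arrayBuffer();
}

// Decode the recorded chunks and feed them through the analyser.
// `audioContext`, `analyser`, and `mediaChunks` come from the code above.
async function visualizeRecording(audioContext, analyser, mediaChunks) {
  const blob = new Blob(mediaChunks, { type: 'video/webm' });
  const arrayBuffer = await blobToArrayBuffer(blob);
  // decodeAudioData extracts the audio track and decodes it into raw PCM.
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
  const source = audioContext.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(analyser);
  analyser.connect(audioContext.destination); // optional: audible playback
  source.start(); // without start() no samples ever reach the analyser
  return source;
}
```

Because a buffer source only produces data while it is playing, the visualization runs for the duration of the recording and then stops, whereas the microphone stream keeps feeding the analyser indefinitely.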