
How to visualize recorded audio from Blob with AudioContext?

I have successfully created an audio wave visualizer based on the MDN example here. I now want to add visualization for recorded audio as well. I record the audio using MediaRecorder and save the result as a Blob. However I cannot find a way to connect my AudioContext to the Blob.

This is the relevant code part so far:

var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioContext.createAnalyser();
var dataArray = new Uint8Array(analyser.frequencyBinCount);
        
if (mediaStream instanceof Blob) {
    // Recorded audio - does not work
    var stream = URL.createObjectURL(mediaStream);
} else {
    // Stream from the microphone - works
    stream = mediaStream;
}

var source = audioContext.createMediaStreamSource(stream);
source.connect(analyser);

mediaStream comes from either:

navigator.mediaDevices.getUserMedia ({
    audio: this.audioConstraints,
    video: this.videoConstraints,
})
.then(stream => {
    mediaStream = stream;
});

or as a result of the recorded data:

mediaRecorder.addEventListener('dataavailable', event => {
    mediaChunks.push(event.data);
});
...
mediaStream = new Blob(mediaChunks, { 'type' : 'video/webm' });
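The question elides the surrounding recorder wiring. As a hedged sketch of how those pieces usually fit together: the final Blob is only complete once the recorder's `stop` event fires, so the chunk collection above can be wrapped in a promise (the `recordToBlob` helper name is hypothetical, not from the question):

```javascript
// Sketch of the full recording flow: collect chunks while recording,
// then build the Blob once the recorder fires its 'stop' event.
function recordToBlob(mediaRecorder) {
    return new Promise(resolve => {
        const mediaChunks = [];
        mediaRecorder.addEventListener('dataavailable', event => {
            mediaChunks.push(event.data);
        });
        mediaRecorder.addEventListener('stop', () => {
            // Only now are all chunks in; assemble the final Blob.
            resolve(new Blob(mediaChunks, { type: 'video/webm' }));
        });
        mediaRecorder.start();
    });
}
```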

How do I connect the AudioContext to the recorded audio? Is it possible with a Blob? Do I need something else? What am I missing?

I've created a fiddle. The relevant part starts at line 118.

Thanks for help and suggestions.

EDIT: Thanks to Johannes Klauß, I've found a solution. See the updated fiddle.

You can use the Response API to create an ArrayBuffer and decode that with the audio context to create an AudioBuffer, which you can connect to the analyser:

mediaRecorder.addEventListener('dataavailable', event => {
    mediaChunks.push(event.data);
});
...
const arrayBuffer = await new Response(new Blob(mediaChunks, { 'type' : 'video/webm' })).arrayBuffer();
const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);

const source = audioContext.createBufferSource();
source.buffer = audioBuffer;
source.connect(analyser);
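For completeness, a minimal sketch of tying this together as one helper, assuming the `audioContext` and `analyser` from earlier. Note that `Blob.prototype.arrayBuffer()` is a newer alternative to the `Response` trick, and the buffer source must be started (and routed somewhere) for the analyser to receive data:

```javascript
// Decode a recorded Blob and play it through an existing AnalyserNode.
// Assumes `audioContext` (AudioContext) and `analyser` (AnalyserNode)
// already exist, as in the snippets above.
async function playRecordedBlob(blob, audioContext, analyser) {
    // Newer alternative to: await new Response(blob).arrayBuffer()
    const arrayBuffer = await blob.arrayBuffer();
    const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);

    const source = audioContext.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(analyser);
    analyser.connect(audioContext.destination); // also make it audible
    source.start();
    return source; // caller can stop() it or listen for 'ended'
}
```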
