
Javascript Array for Signal Processing

I am trying to record and edit my voice in javascript. Specifically, I am trying to record it, for my boss, in an array that looks like this: [0, 102, 301, ...], where the values are samples of my voice.

When I record my voice in javascript, I get a Blob. Is there any way to transform a Blob into the [x, y, z, ...] array? Or how is signal processing normally done in javascript?

This is code from a Medium article, and it is how we are doing things; I just can't share the actual company code.

const recordAudio = () =>
    new Promise(async resolve => {
        const stream = await navigator.mediaDevices.getUserMedia({ audio:true});
        const mediaRecorder = new MediaRecorder(stream);
        const audioChunks = [];

        mediaRecorder.addEventListener("dataavailable", event => {
            audioChunks.push(event.data);
        });

        const start = () => mediaRecorder.start();

        const stop = () =>
            new Promise(resolve => {
                mediaRecorder.addEventListener("stop", () => {
                    console.log(audioChunks);
                    const audioBlob = new Blob(audioChunks);
                    const audioURL = URL.createObjectURL(audioBlob);
                    const audio = new Audio(audioURL);
                    const play = () => audio.play();
                    resolve({ audioBlob, audioURL, play });
                });

                mediaRecorder.stop();
            });

        resolve({ start, stop });
    });

const sleep = time => new Promise(resolve => setTimeout(resolve, time));

const handleAction = async () => {
    const recorder = await recordAudio();
    const actionButton = document.getElementById('action');
    actionButton.disabled = true;
    recorder.start();
    await sleep(3000);
    const audio = await recorder.stop();
    audio.play();
    await sleep(3000);
    actionButton.disabled = false;
};
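For what the question asks, the `audioBlob` resolved above can be decoded into a plain sample array with the Web Audio API. A minimal sketch, assuming a browser environment; `blobToSamples` and `floatTo16BitSamples` are illustrative names, not part of the article's code:

```javascript
// Pure helper: scale Float32 samples in [-1, 1] to 16-bit integer samples.
const floatTo16BitSamples = floats =>
    Array.from(floats, s => Math.round(Math.max(-1, Math.min(1, s)) * 32767));

// Browser-only: decode a recorded Blob into a plain array of samples.
async function blobToSamples(audioBlob) {
    const audioContext = new AudioContext();
    // Blob -> ArrayBuffer -> decoded PCM AudioBuffer
    const arrayBuffer = await audioBlob.arrayBuffer();
    const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
    // getChannelData(0) is a Float32Array of the first channel, values in [-1, 1]
    return floatTo16BitSamples(audioBuffer.getChannelData(0));
}
```

Usage would be `const samples = await blobToSamples(audioBlob);`. Note that `decodeAudioData` resamples to the context's sample rate, and this sketch reads only the first channel.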

You can use an AudioContext and feed the user media stream into it; you can then read out a Uint8Array containing either the raw time-domain signal or the already-transformed frequency-domain signal.

You can find more details here:

https://developer.mozilla.org/en-US/docs/Web/API/AnalyserNode

// initialize your signal catching system
let audioContext = new AudioContext();
let analyser = audioContext.createAnalyser();
navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
    let source = audioContext.createMediaStreamSource(stream);
    source.connect(analyser);
});

// then poll the analyser for the latest signal every millisecond
setInterval(() => {
    const bufferLength = analyser.frequencyBinCount;
    // use separate buffers: each call overwrites the array you pass in
    const timeData = new Uint8Array(bufferLength);
    const freqData = new Uint8Array(bufferLength);
    // get time domain signal
    analyser.getByteTimeDomainData(timeData);
    // get frequency domain signal
    analyser.getByteFrequencyData(freqData);
    console.log(timeData, freqData);
}, 1);

For visualization this works fine. For recording, there may be a problem with repeated samples if you poll the analyser several times before the signal changes, or with holes in the data, but I can't figure out how to read directly from the stream.
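One way to close that gap (not from the original answer): an AudioWorkletNode's processor receives every 128-sample render block exactly once, so nothing repeats and nothing is skipped. A sketch, assuming a browser environment; `CaptureProcessor`, `captureSamples`, and `mergeChunks` are made-up names:

```javascript
// Worklet code kept as a string so it can be loaded from a Blob URL.
const workletSource = `
class CaptureProcessor extends AudioWorkletProcessor {
    process(inputs) {
        // inputs[0][0] is a Float32Array of 128 samples for channel 0
        if (inputs[0].length > 0) this.port.postMessage(inputs[0][0].slice());
        return true; // keep the processor alive
    }
}
registerProcessor('capture-processor', CaptureProcessor);
`;

// Pure helper: flatten the posted Float32Array blocks into one sample array.
const mergeChunks = chunks => chunks.flatMap(c => Array.from(c));

// Browser-only: route the microphone through the worklet and collect blocks.
async function captureSamples() {
    const audioContext = new AudioContext();
    const moduleUrl = URL.createObjectURL(
        new Blob([workletSource], { type: 'application/javascript' }));
    await audioContext.audioWorklet.addModule(moduleUrl);
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const source = audioContext.createMediaStreamSource(stream);
    const node = new AudioWorkletNode(audioContext, 'capture-processor');
    const chunks = [];
    node.port.onmessage = event => chunks.push(event.data);
    source.connect(node);
    return { chunks, stop: () => source.disconnect() };
}
```

When recording is done, `mergeChunks(chunks)` yields one continuous array of float samples in [-1, 1].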
