AnalyserNode.getFloatFrequencyData always returns -Infinity

Alright, so I'm trying to determine the intensity (in dB) of samples from an audio file recorded by the user's browser.

I have been able to record it and play it back through an HTML audio element. But when I try to use this element as a source and connect it to an AnalyserNode, AnalyserNode.getFloatFrequencyData always returns an array full of -Infinity, getByteFrequencyData always returns zeroes, and getByteTimeDomainData is full of 128.

Here's my code:

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var source;

var analyser = audioCtx.createAnalyser();

var bufferLength = analyser.frequencyBinCount;
var data = new Float32Array(bufferLength);

mediaRecorder.onstop = function(e) {
  var blob = new Blob(chunks, { 'type' : 'audio/ogg; codecs=opus' });

  chunks = [];
  var audioURL = window.URL.createObjectURL(blob);
  // audio is an HTML audio element
  audio.src = audioURL;

  audio.addEventListener("canplaythrough", function() {
      source = audioCtx.createMediaElementSource(audio);

      source.connect(analyser);
      analyser.connect(audioCtx.destination);

      analyser.getFloatFrequencyData(data);
      console.log(data);
  });
}

Any idea why the AnalyserNode behaves as if the source is empty/muted? I also tried using the stream as the source while recording, with the same result.

You need to fetch the audio file and decode the audio buffer. The URL of the audio source must also be on the same domain or have the correct CORS headers (as mentioned by Anthony).

Note: Replace <FILE-URI> with the path to your file in the example below.

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var source;
var analyser = audioCtx.createAnalyser();
var button = document.querySelector('button');
var freqs;
var times;

// Configure the analyser once, before any data is read
analyser.smoothingTimeConstant = 0.8;
analyser.fftSize = 2048;

button.addEventListener('click', (e) => {
  fetch("<FILE-URI>", {
    headers: new Headers({
      "Content-Type": "audio/mpeg"
    })
  }).then(function (response) {
    return response.arrayBuffer();
  }).then((ab) => {
    audioCtx.decodeAudioData(ab, (buffer) => {
      source = audioCtx.createBufferSource();
      source.connect(audioCtx.destination);
      source.connect(analyser);
      source.buffer = buffer;
      source.start(0);
      viewBufferData();
    });
  });
});

// Poll the analyser while the buffer plays
function viewBufferData() {
  setInterval(function () {
    freqs = new Uint8Array(analyser.frequencyBinCount);
    times = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(freqs);
    analyser.getByteTimeDomainData(times);
    console.log(freqs);
    console.log(times);
  }, 1000);
}
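Since the original goal was intensity in dB, it may help to convert the Uint8Array values from getByteFrequencyData back into decibels: each byte linearly maps the analyser's [minDecibels, maxDecibels] range (defaults -100 dB and -30 dB). A minimal sketch of that inverse mapping as a plain function (byteToDb is an illustrative helper, not part of the Web Audio API):

```javascript
// Map a byte value (0-255) from getByteFrequencyData back to decibels.
// minDb/maxDb default to the AnalyserNode defaults (-100 dB and -30 dB).
function byteToDb(byteValue, minDb = -100, maxDb = -30) {
  return minDb + (byteValue / 255) * (maxDb - minDb);
}

console.log(byteToDb(0));   // -100 (at or below minDecibels)
console.log(byteToDb(255)); // -30  (at or above maxDecibels)
```

Note that values outside [minDecibels, maxDecibels] are clamped by the analyser before being written into the byte array, so 0 and 255 only tell you the bin is at or beyond those bounds.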

I ran into the same issue; thanks to some of your code snippets, I made it work on my end (the code below is TypeScript and will not work directly in the browser at the time of writing):

audioCtx.decodeAudioData(this.result as ArrayBuffer).then(function (buffer: AudioBuffer) {
  soundSource = audioCtx.createBufferSource();
  soundSource.buffer = buffer;
  //soundSource.connect(audioCtx.destination); // I do not need to play the sound
  soundSource.connect(analyser);
  soundSource.start(0);

  setInterval(() => {
    calc(); // In here, I will get the analysed data with analyser.getFloatFrequencyData
  }, 300); // This can be changed to 0.
  // The interval helps with making sure the buffer has the data
});

Some explanation (I'm still a beginner when it comes to the Web Audio API, so my explanation might be wrong or incomplete): an analyser needs to be able to analyse a specific part of your sound file. In this case I create an AudioBufferSourceNode that contains the buffer I got from decoding the audio data. I feed the buffer to the source, which eventually gets copied into the analyser. However, without the interval callback, the buffer never seems to be ready, and the analysed data contains -Infinity (which I assume means the absence of any sound, as there is nothing to read) at every index of the array. That is why the interval is there: it analyses the data every 300 ms.
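To make the "all -Infinity" case described above easy to detect, a small hypothetical helper (not part of any API) can check whether a Float32Array filled by getFloatFrequencyData contains only -Infinity bins, i.e. the analyser has not received any audio yet:

```javascript
// Returns true when every frequency bin is -Infinity,
// meaning the analyser has seen no audio data so far.
function isSilent(freqData) {
  return freqData.every((bin) => bin === -Infinity);
}

// Example with mock data (typed arrays support .every and .fill):
console.log(isSilent(new Float32Array(4).fill(-Infinity))); // true
console.log(isSilent(Float32Array.from([-80, -Infinity]))); // false
```

Polling with such a check lets you skip analysis frames until real data arrives, instead of relying on a fixed delay.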

Hope this helps someone!

Is the source file from a different domain? That would fail in createMediaElementSource.
