
Javascript Web Audio API AnalyserNode Not Working when using AudioBufferSource

I'm trying to build an "offline" benchmark version of a VAD algorithm I'm working on. In the online version I use createMediaStreamSource as the input to an analyser node and it works perfectly fine. In the offline version I want to load and split a recorded audio file, so I'm using an XHR to load the file as an ArrayBuffer, splitting it (so it will simulate an audio stream) and using it as the source via createBufferSource.
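The loading step itself isn't shown in the post; as context, a minimal sketch of how the recording could be fetched and decoded, assuming an existing AudioContext named audioCtx (audio_url and splitIntoSegments are placeholder names, not from the original code):

const xhr = new XMLHttpRequest();
xhr.open('GET', audio_url);           // audio_url: placeholder URL of the recorded file
xhr.responseType = 'arraybuffer';     // receive the raw bytes as an ArrayBuffer
xhr.onload = () => {
  audioCtx.decodeAudioData(xhr.response, (audioBuffer) => {
    splitIntoSegments(audioBuffer);   // placeholder for the splitting code below
  });
};
xhr.send();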

This is the code for splitting the audioBuffer:

let audio_dur = audioBuffer.duration;

let segments_num = Math.ceil(audio_dur / segment_dur);              // number of segments to simulate a stream
let segment_length = Math.ceil(audioBuffer.length / segments_num);  // samples per segment
segmentsArr = new Array(segments_num);

let AudioData = new Float32Array(audioBuffer.length);
AudioData = audioBuffer.getChannelData(0);                           // mono: use channel 0

// copy each segment's samples into its own Float32Array
for (let i = 0; i <= segments_num - 1; i++) {
  segmentsArr[i] = AudioData.slice(i * segment_length, (i + 1) * segment_length - 1);
}

Then, the part that connects it to the analyser:

const analyser = audioCtx.createAnalyser();
analyser.minDecibels = min_decibels;
analyser.fftSize = fft_size;

const T_data = new Float32Array(analyser.fftSize);          // time-domain data
const F_data = new Uint8Array(analyser.frequencyBinCount);  // frequency data

let segments_num = segmentsArr.length;
let segment_length = segmentsArr[1].length;

var cur_Buffer = audioCtx.createBuffer(1, segment_length, audioCtx.sampleRate);

for (let segment_ind = 0; segment_ind <= segments_num - 1; segment_ind++) {
  let cur_segment = segmentsArr[segment_ind];
  cur_Buffer.copyToChannel(cur_segment, 0, 0);   // reuse one AudioBuffer, refill it per segment

  let cur_source = audioCtx.createBufferSource();
  cur_source.loop = false;
  cur_source.buffer = cur_Buffer;

  cur_source.connect(analyser);

  analyser.getByteFrequencyData(F_data);     // get current data
  analyser.getFloatTimeDomainData(T_data);   // get current data
  ...

and the code goes on.

PROBLEM IS: the time data and frequency data returned from the analyser are always empty.

Before you ask: 1. minDecibels is at -100 dB (the lowest possible). 2. segmentsArr is not empty and I'm able to play it segment by segment, creating an AudioBufferSource in exactly the same way and connecting it to the audio destination.
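For comparison, a minimal sketch of that playback check, assuming the same audioCtx and an AudioBuffer like cur_Buffer holding one segment (check_source is a placeholder name):

const check_source = audioCtx.createBufferSource();
check_source.buffer = cur_Buffer;              // one segment copied into an AudioBuffer
check_source.connect(audioCtx.destination);    // routed to the speakers instead of the analyser
check_source.start();                          // playback is audible, so the segment data itself is valid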

ANSWERED: Thanks to @cwilso, the problem was that I hadn't called cur_source.start() on each new source. Thanks a lot.
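For reference, a minimal sketch of the inner loop with that fix applied, reusing the names from the code above (audioCtx, analyser, segmentsArr, cur_Buffer, F_data, T_data):

for (let segment_ind = 0; segment_ind < segments_num; segment_ind++) {
  cur_Buffer.copyToChannel(segmentsArr[segment_ind], 0, 0);

  let cur_source = audioCtx.createBufferSource();
  cur_source.loop = false;
  cur_source.buffer = cur_Buffer;
  cur_source.connect(analyser);

  // each AudioBufferSourceNode is one-shot, so a new source is created per
  // segment and start() has to be called on every one of them
  cur_source.start();

  analyser.getByteFrequencyData(F_data);
  analyser.getFloatTimeDomainData(T_data);
}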

It's hard to see from this code exactly how this is connected and how the code is started.

1) You're calling start() on the buffer source nodes, right?
2) You're calling getByteFrequencyData() etc. after that start happens?
3) You do hear the buffer chunks being played through the destination?
