
Web Audio API Analyser Node Not Working With Microphone Input

The bug preventing microphone input in Chrome Canary (http://code.google.com/p/chromium/issues/detail?id=112367) is now fixed. That part does seem to be working: I can assign the mic input to an audio element and hear the result through the speakers.

But I'd like to connect an analyser node in order to do an FFT. The analyser node works fine if I set the audio source to a local file. The problem is that when it is connected to the mic audio stream, the analyser node just returns the base value as if it had no audio stream at all. (It's -100 over and over again, if you're curious.)

Anyone know what's up? Is it not implemented yet? Is this a Chrome bug? I'm running 26.0.1377.0 on Windows 7 with the getUserMedia flag enabled, and I'm serving through localhost via Python's SimpleHTTPServer so the page can request permissions.

Code:

var aCtx = new webkitAudioContext();
var analyser = aCtx.createAnalyser();
if (navigator.getUserMedia) {
  navigator.getUserMedia({audio: true}, function(stream) {
    // audio.src = "stupid.wav"
    audio.src = window.URL.createObjectURL(stream);
  }, onFailure);
}
$('#audio').on("loadeddata", function(){
  source = aCtx.createMediaElementSource(audio);
  source.connect(analyser);
  analyser.connect(aCtx.destination);
  process();
});

Again, if I set audio.src to the commented-out version, it works, but with the microphone it doesn't. process contains:

var FFTData = new Float32Array(analyser.frequencyBinCount);
analyser.getFloatFrequencyData(FFTData);
console.log(FFTData[0]);
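For context, -100 is the analyser's default `minDecibels`, i.e. the floor value reported when a bin contains silence. A minimal sketch of the decibel conversion (the clamping range here is an assumption based on the spec's default `minDecibels`/`maxDecibels` of -100 and -30; `toDecibels` is an illustrative helper, not a Web Audio API function):

```javascript
// Sketch: FFT magnitudes are reported in decibels.
// A magnitude of 0 (silence) maps to -Infinity dB, which clamping
// at minDecibels turns into the -100 seen in the console.
function toDecibels(magnitude, minDb, maxDb) {
  var db = 20 * Math.log10(magnitude); // -Infinity when magnitude is 0
  return Math.min(maxDb, Math.max(minDb, db));
}

console.log(toDecibels(0, -100, -30));     // -100: the silence floor
console.log(toDecibels(0.001, -100, -30)); // about -60
```

So a stream of -100 readings means the analyser is seeing a signal of zero magnitude, not a broken FFT.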

I've also tried using createMediaStreamSource and bypassing the audio element, as in example 4 of https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/webrtc-integration.html. Also unsuccessful. :(

if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true}, function(stream) {
        var microphone = context.createMediaStreamSource(stream);
        microphone.connect(analyser);
        analyser.connect(aCtx.destination);
        process();
    });
}

I imagine it might be possible to write the MediaStream to a buffer and then use dsp.js or something to do the FFT, but I wanted to check here first before I go down that road.
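For reference, that buffer route would look roughly like the sketch below, using a ScriptProcessorNode (called createJavaScriptNode in older WebKit builds). The names `captured` and `appendSamples` are illustrative, not from the original post:

```javascript
// Hypothetical sketch of buffering raw mic samples for a JS-side FFT.
// Browser wiring (assumes aCtx and microphone from the snippets above):
//   var processor = aCtx.createScriptProcessor(1024, 1, 1);
//   microphone.connect(processor);
//   processor.connect(aCtx.destination);
//   processor.onaudioprocess = function (e) {
//     appendSamples(e.inputBuffer.getChannelData(0));
//   };

var captured = []; // one Float32Array per audio callback
function appendSamples(channelData) {
  // Copy the data: the engine reuses the underlying buffer
  // between onaudioprocess callbacks.
  captured.push(new Float32Array(channelData));
}
```

The captured chunks could then be fed to dsp.js or any other JS FFT, though as the answer below shows this turned out to be unnecessary.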

It was a variable scoping issue. In the second example, I was defining the microphone locally and then trying to access its stream with the analyser in another function. I just made all the Web Audio API nodes global for peace of mind. Also, it takes a few seconds for the analyser node to start reporting values other than -100. Working code for those interested:

// Globals
var aCtx;
var analyser;
var microphone;
if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true}, function(stream) {
        aCtx = new webkitAudioContext();
        analyser = aCtx.createAnalyser();
        microphone = aCtx.createMediaStreamSource(stream);
        microphone.connect(analyser);
        // analyser.connect(aCtx.destination);
        process();
    });
}
function process(){
    setInterval(function(){
        var FFTData = new Float32Array(analyser.frequencyBinCount);
        analyser.getFloatFrequencyData(FFTData);
        console.log(FFTData[0]);
    },10);
}

If you would like to hear the live audio, you can connect the analyser to the destination (speakers), as commented out above. Watch out for some lovely feedback though!
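To do something more useful with the FFT data than logging bin 0: bin i of an N-point FFT at sample rate sr covers roughly i * sr / N Hz. A small hypothetical helper (not part of the original answer; in the browser you would pass it `FFTData`, `aCtx.sampleRate`, and `analyser.fftSize`):

```javascript
// Hypothetical helper: map the loudest FFT bin to a frequency in Hz.
// Bin i of an N-point FFT at sample rate sr is centred near i * sr / N.
function peakFrequency(fftData, sampleRate, fftSize) {
  var peakIndex = 0;
  for (var i = 1; i < fftData.length; i++) {
    if (fftData[i] > fftData[peakIndex]) peakIndex = i;
  }
  return peakIndex * sampleRate / fftSize;
}

// e.g. dB values with a peak in bin 2, at 44100 Hz with a 2048-point FFT:
console.log(peakFrequency([-90, -80, -30, -85], 44100, 2048)); // about 43 Hz
```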
