
Split a stereo audio file into AudioNodes for each channel

How can I split a stereo audio file (I'm currently working with a WAV, but I'm interested in how to do it for an MP3 as well, if that's different) into left and right channels to feed into two separate Fast Fourier Transforms (FFTs) from the p5.sound.js library?

I've written out what I think I need to be doing in the code below, but I haven't been able to find examples of anyone doing this through Google searches, and all my layman's attempts have turned up nothing.

I'll share what I have below, but in all honesty, it's not much. Everything in question would go in the setup function where I've made a note:

// variables for the p5 sound object, the FFT, and playback state
var sound = null;
var fft = null;
var playing = false;

function preload(){
    sound = loadSound('assets/leftRight.wav');
}

function setup(){
    createCanvas(windowWidth, windowHeight);
    background(0);

    // I need to do something here to split the audio and return an AudioNode for just
    // the left stereo channel. I have a feeling it's something like
    // feeding sound.getBlob() to a FileReader(), doing some manipulation, then converting
    // the result of FileReader() to a Web Audio API source node and feeding that into
    // fft.setInput() like justTheLeftChannel is below, but I'm not understanding how to work
    // with JavaScript audio methods and createChannelSplitter(), and the attempts I've made
    // have just turned up nothing.

    fft = new p5.FFT();
    fft.setInput(justTheLeftChannel);
}

function draw(){
    sound.pan(-1);
    background(0);
    push();
    noFill();
    stroke(255, 0, 0);
    strokeWeight(2);

    beginShape();
    //calculate the waveform from the fft.
    var wave = fft.waveform();
    for (var i = 0; i < wave.length; i++){
        //for each element of the waveform map it to screen 
        //coordinates and make a new vertex at the point.
        var x = map(i, 0, wave.length, 0, width);
        var y = map(wave[i], -1, 1, 0, height);

        vertex(x, y);
    }

    endShape();
    pop();
}

function mouseClicked(){
    if (!playing){
        sound.loop();
        playing = true;
    } else {
        sound.stop();
        playing = false;
    }
}

Solution:

I'm not a p5.js expert, but I've worked with it enough that I figured there had to be a way to do this without the whole runaround of blobs and file reading. The docs aren't very helpful for complicated processing, so I dug around a little in the p5.Sound source code, and this is what I came up with:

// left channel
sound.setBuffer([sound.buffer.getChannelData(0)]);
// right channel
sound.setBuffer([sound.buffer.getChannelData(1)]);

Here's a working example: clicking the canvas toggles between left/stereo/right audio playback and the corresponding FFT visuals.
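One point worth spelling out: passing setBuffer a one-element array selects that channel's samples as-is; it is not the averaging stereo-to-mono downmix some tools perform. A toy comparison with hypothetical helpers, using plain Float32Arrays to stand in for channel data:

```javascript
// selectChannel: what setBuffer([sound.buffer.getChannelData(0)]) effectively does -
// one channel's samples, untouched.
function selectChannel(channels, index) {
  return channels[index];
}

// downmixToMono: the *other* common stereo-to-mono operation, averaging
// corresponding samples - NOT what the setBuffer trick does.
function downmixToMono(channels) {
  const out = new Float32Array(channels[0].length);
  for (let i = 0; i < out.length; i++) {
    let sum = 0;
    for (const ch of channels) sum += ch[i];
    out[i] = sum / channels.length;
  }
  return out;
}

const left = Float32Array.from([1, 1]);
const right = Float32Array.from([0, 0.5]);
// selectChannel([left, right], 0) → [1, 1]
// downmixToMono([left, right])   → [0.5, 0.75]
```

The distinction matters here because the whole point is to FFT each channel in isolation; a downmix would blend them back together.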


Explanation:

p5.SoundFile has a setBuffer method that can be used to modify the audio content of the sound file object in place. Its signature specifies that it accepts an array of buffer objects, and if that array has only one item, it produces a mono source, which is already in the correct format to feed to the FFT. So how do we produce a buffer containing only one channel's data?

Throughout the source code there are examples of individual channel manipulation via sound.buffer.getChannelData(). I was wary of accessing undocumented properties at first, but it turns out that since p5.Sound uses the Web Audio API under the hood, this buffer is really just a plain old Web Audio AudioBuffer, and the getChannelData method is well documented.
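For context on what getChannelData hands back: a stereo WAV stores its samples interleaved (L0, R0, L1, R1, ...), and the browser's decoder unpacks them into one planar Float32Array per channel, which is exactly what getChannelData(n) returns. A toy sketch of that unpacking step (just an illustration of the data layout, not p5.Sound or browser code):

```javascript
// Deinterleave [L0, R0, L1, R1, ...] into one planar array per channel -
// the layout an AudioBuffer exposes through getChannelData().
function deinterleave(interleaved, numChannels) {
  const frames = interleaved.length / numChannels;
  const channels = [];
  for (let c = 0; c < numChannels; c++) {
    const out = new Float32Array(frames);
    for (let i = 0; i < frames; i++) {
      out[i] = interleaved[i * numChannels + c];
    }
    channels.push(out);
  }
  return channels;
}

const [leftCh, rightCh] = deinterleave(Float32Array.from([1, -1, 2, -2, 3, -3]), 2);
// leftCh  → [1, 2, 3]
// rightCh → [-1, -2, -3]
```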

The only downside of this approach is that setBuffer acts directly on the SoundFile, so you have to load the file again for each channel you want to separate, but I'm sure there's a workaround for that.
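One possible workaround (a sketch of my own, not a p5.Sound API): since getChannelData returns an ordinary Float32Array, you could load the file once, copy each channel out of the buffer, and then hand the copies to two separate SoundFile objects' setBuffer calls. Shown here against a mock buffer object so the copying logic runs outside the browser:

```javascript
// Hypothetical helper: copy every channel out of a (real or mock) AudioBuffer.
// slice() makes an independent copy, so a later setBuffer on one SoundFile
// can't clobber the data destined for the other channel.
function splitChannels(buffer) {
  const channels = [];
  for (let c = 0; c < buffer.numberOfChannels; c++) {
    channels.push(buffer.getChannelData(c).slice());
  }
  return channels;
}

// Mock stand-in for sound.buffer, mirroring the AudioBuffer surface we use.
const mockBuffer = {
  numberOfChannels: 2,
  data: [Float32Array.from([0.1, 0.2]), Float32Array.from([-0.1, -0.2])],
  getChannelData(c) { return this.data[c]; }
};

const [leftCopy, rightCopy] = splitChannels(mockBuffer);
// In a sketch you would then do something like:
//   leftSound.setBuffer([leftCopy]);
//   rightSound.setBuffer([rightCopy]);
```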

Happy splitting!
