Tarsos dsp Android AudioTrack plays static or too fast

This problem has been frustrating me for a while. I'm trying to use Tarsos DSP to perform some basic signal processing in a project I'm working on for Android. The audio comes from a standard WAV file that is 44.1 kHz, 16-bit stereo. When I set up and run a Tarsos AudioDispatcher with an AudioProcessor that uses Android's AudioTrack to output sound, I get static or audio that plays way too fast.

Here is the code that sets up the AudioDispatcher:

public void Play(String source, double startTime, final double endTime){
    InputStream wavStream;
    try {
        wavStream = new FileInputStream(source);
        UniversalAudioInputStream audioStream = new UniversalAudioInputStream(wavStream, audioFormat);
        dispatcher = new AudioDispatcher(audioStream, bufferSize, overLap);
        AndroidAudioPlayer player = new AndroidAudioPlayer(audioFormat, bufferSize);
        dispatcher.addAudioProcessor(player);
        dispatcher.skip(startTime);
        // Watchdog thread: stop the dispatcher once the requested end time is reached.
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (dispatcher.secondsProcessed() < endTime) {
                    try {
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
                dispatcher.stop();
            }
        }).start();
        dispatcher.run();
        try {
            audioStream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }
}

One thing I've noted is that if I let the AudioDispatcher run through the entire WAV file, it reports a total number of seconds processed that is longer than what is indicated in the WAV file's header, which makes the methods that set the start and end times inaccurate, though usually still within bounds. (Why does this happen?)**
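A quick way to cross-check this is to compute the duration the header implies and compare it with dispatcher.secondsProcessed(). The helper below is only an illustrative sketch; the method name and the example numbers are made up, not taken from the question:

// Duration of a PCM WAV from its header: dataBytes / (sampleRate * channels * bytesPerSample).
static double wavSeconds(long dataChunkBytes, int sampleRate, int channels, int bytesPerSample) {
    return dataChunkBytes / (double) (sampleRate * channels * bytesPerSample);
}

// Example: a 44.1 kHz, 16-bit stereo file with a 10,584,000-byte data chunk
// should last 10_584_000 / (44_100 * 2 * 2) = 60.0 seconds. If
// dispatcher.secondsProcessed() ends up noticeably larger than that, the stream
// is feeding more bytes than the header advertises.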

Here is the code for the AndroidAudioPlayer that implements a Tarsos AudioProcessor:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

import be.tarsos.dsp.AudioEvent;
import be.tarsos.dsp.AudioProcessor;
import be.tarsos.dsp.io.TarsosDSPAudioFormat;

public class AndroidAudioPlayer implements AudioProcessor {

    private AudioTrack audioTrack;

    AndroidAudioPlayer(TarsosDSPAudioFormat audioFormat, int bufferSize){
        audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                (int) audioFormat.getSampleRate(),
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT,
                bufferSize,
                AudioTrack.MODE_STREAM);
    }

    @Override
    public boolean process(AudioEvent audioEvent){
        // Convert the little-endian byte buffer from TarsosDSP into 16-bit samples.
        short[] shorts = new short[audioEvent.getBufferSize() / 2];
        ByteBuffer.wrap(audioEvent.getByteBuffer()).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
        audioTrack.write(shorts, 0, shorts.length);
        audioTrack.play();
        return true;
    }

    @Override
    public void processingFinished(){}
}

I have another audio processor that I wrote that uses an AudioDispatcher to write a clip from the WAV file using JavaZoom, and it also produces static or incorrect audio. However, when I write a clip from the WAV file using an InputStream and JavaZoom directly, it works fine some of the time or produces static, which I'm assuming is because the startTime and stopTime variables are set incorrectly by my methods that rely on Tarsos. Any insight would be greatly appreciated.

Before the methods above are called, I initially call a method that uses an AudioDispatcher on the same WAV file with Oscilloscope and ComplexOnsetDetector audio processors to generate a waveform view and fill an array with timecodes for onsets. The audioFormat variable is created like this: TarsosDSPAudioFormat audioFormat = new TarsosDSPAudioFormat(sampleRate, 16, 2, false, false);. The sample rate is read from the WAV file and I've checked that it reads correctly. *The buffer size is 1024 and the overlap is 512, and I've tried playing with all of these values.
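For reference, that first pass looks roughly like the sketch below. It is a simplified reconstruction rather than the exact code, and the ComplexOnsetDetector, OnsetHandler, and Oscilloscope.OscilloscopeEventHandler signatures are assumed from a recent TarsosDSP release, so they may need adjusting for other versions:

import java.io.FileInputStream;
import java.util.ArrayList;
import java.util.List;

import be.tarsos.dsp.AudioDispatcher;
import be.tarsos.dsp.AudioEvent;
import be.tarsos.dsp.Oscilloscope;
import be.tarsos.dsp.io.TarsosDSPAudioFormat;
import be.tarsos.dsp.io.UniversalAudioInputStream;
import be.tarsos.dsp.onsets.ComplexOnsetDetector;
import be.tarsos.dsp.onsets.OnsetHandler;

void analyze(String source, TarsosDSPAudioFormat audioFormat) throws Exception {
    final List<Double> onsetTimes = new ArrayList<>();

    UniversalAudioInputStream analysisStream =
            new UniversalAudioInputStream(new FileInputStream(source), audioFormat);
    AudioDispatcher analysisDispatcher = new AudioDispatcher(analysisStream, 1024, 512);

    ComplexOnsetDetector onsetDetector = new ComplexOnsetDetector(1024);
    onsetDetector.setHandler(new OnsetHandler() {
        @Override
        public void handleOnset(double time, double salience) {
            onsetTimes.add(time);   // collect onset timecodes (seconds)
        }
    });

    Oscilloscope oscilloscope = new Oscilloscope(new Oscilloscope.OscilloscopeEventHandler() {
        @Override
        public void handleEvent(float[] samples, AudioEvent event) {
            // push the samples to the waveform view here
        }
    });

    analysisDispatcher.addAudioProcessor(onsetDetector);
    analysisDispatcher.addAudioProcessor(oscilloscope);
    analysisDispatcher.run();       // blocking; run on a background thread in practice
}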

I have changed the buffer size to 64kb and the overlap to 32kb. When the audio plays, it sounds almost correct, but it skips a little. However, it still sometimes plays only static, and no matter what length of WAV file I use, the AudioDispatcher reports that it was 315 seconds long*.
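One thing to note about the skipping (a sketch and an assumption on my part, not a confirmed fix): the buffer argument to the AudioTrack constructor is in bytes and has a device-dependent minimum, so it can be sized with AudioTrack.getMinBufferSize() and play() called once up front instead of on every processed buffer:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

static AudioTrack createTrack(int sampleRate) {
    // Minimum buffer the device will accept for this configuration, in bytes.
    int minBytes = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO,
            AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
            sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO,
            AudioFormat.ENCODING_PCM_16BIT,
            Math.max(minBytes, 64 * 1024),   // AudioTrack buffer size is in bytes
            AudioTrack.MODE_STREAM);
    track.play();                            // start once; keep writing from process()
    return track;
}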

**I have fixed this problem. I was loading a WAV file that was being created by the JavaZoom MP3 converter, which was continually overwriting a file without deleting it first. I think Tarsos was using the file length to determine the play length, which was incorrect because the file was overwritten without being deleted. Deleting the file first solved that problem.
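In other words, the fix boils down to something like this minimal sketch (outputPath is just a placeholder for the converter's target path):

import java.io.File;

String outputPath = "/path/to/converted.wav";   // placeholder path

// Delete any stale output before the converter writes the new WAV,
// so the file length matches the freshly converted audio.
File out = new File(outputPath);
if (out.exists()) {
    out.delete();
}
// ...then run the JavaZoom conversion into outputPath as before.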

I just need to figure out why the audio is skipping during playback and sometimes just plays static, and then I think I'm good to go.

Use mono (1-channel) input. TarsosDSP audio processors do not currently support stereo.
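A minimal sketch of what that change could look like, assuming the WAV file itself has already been converted to mono (for example with a desktop tool or ffmpeg). The format values mirror the ones from the question except for the channel count, and sampleRate/source are placeholders for the values read from the file:

// Mono pipeline sketch; wrap the stream creation in try/catch as in the Play() method above.
float sampleRate = 44100f;
String source = "/path/to/mono.wav";   // placeholder path

TarsosDSPAudioFormat monoFormat = new TarsosDSPAudioFormat(sampleRate, 16, 1, false, false);
UniversalAudioInputStream monoStream =
        new UniversalAudioInputStream(new FileInputStream(source), monoFormat);
AudioDispatcher monoDispatcher = new AudioDispatcher(monoStream, 1024, 512);

// The AudioTrack side must match: CHANNEL_OUT_MONO instead of CHANNEL_OUT_STEREO.
AudioTrack monoTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        (int) sampleRate,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        AudioTrack.getMinBufferSize((int) sampleRate,
                AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT),
        AudioTrack.MODE_STREAM);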
