
NAudio: Using ASIO to record audio and output with a guitar

I am currently working on a senior design project: a Windows Forms app that lets users plug in their guitar, apply distortions in real time, hear the processed signal through the output, and automatically transcribe what they play into ASCII tabs. Right now I am trying to get the real-time listening portion working. Recording and the distortion effects themselves work fine; I am just having some issues using ASIO. I've looked at this post, How to record and playback with NAudio using AsioOut, but it was not much help with my issue. Here is my code:

private BufferedWaveProvider buffer;
private AsioOut input;
private AsioOut output;

private void listenBtn_Click(object sender, EventArgs e)
{
    input = new AsioOut(RecordInCbox.SelectedIndex);
    WaveFormat format = new WaveFormat();
    buffer = new BufferedWaveProvider(format);
    buffer.DiscardOnBufferOverflow = true;
    input.InitRecordAndPlayback(buffer, 1, 44100);
    input.AudioAvailable += new EventHandler<AsioAudioAvailableEventArgs>(AudioAvailable);

    //output = new AsioOut(RecordInCbox.SelectedIndex);
    //output.Init(buffer);

    input.Play();
    //output.Play();
}

public void AudioAvailable(object sender, AsioAudioAvailableEventArgs e)
{
    byte[] buf = new byte[e.SamplesPerBuffer];
    e.WrittenToOutputBuffers = true;
    for (int i = 0; i < e.InputBuffers.Length; i++)
    {
        Array.Copy(e.InputBuffers, e.OutputBuffers, 1);
        Marshal.Copy(e.InputBuffers[i], buf, 0, e.SamplesPerBuffer);
        buffer.AddSamples(buf, 0, buf.Length);
    }
}

Currently it is capturing the audio and pushing it into the buffer, but the output is not working. I can hear the guitar if I enable "Listen to this device" in the Windows recording settings, but that defeats the purpose: I want to apply my distortion and hear the processed signal as the output. Thanks!
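For the distortion step itself (separate from the ASIO routing problem), once the input is available as 32-bit float samples it can be shaped in place before being handed to the output. A minimal sketch, assuming a simple tanh soft-clip; the `drive` parameter and the shaping curve are illustrative choices, not part of the question's code:

```csharp
using System;

class Distortion
{
    // Soft-clip each sample with tanh; higher drive = more distortion.
    // Output stays within (-1, 1), so it is safe to hand straight to playback.
    public static void SoftClip(float[] samples, int count, float drive)
    {
        for (int i = 0; i < count; i++)
            samples[i] = (float)Math.Tanh(samples[i] * drive);
    }
}
```

Because the transform is per-sample and in place, it can run inside whatever audio callback ends up working, before the buffer is written to the output side.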

You don't need to add samples to the buffer. The buffer only serves to determine the number of output channels you want. I did it this way:

[DllImport("Kernel32.dll", EntryPoint = "RtlMoveMemory", SetLastError = false)]
private static unsafe extern void MoveMemory(IntPtr dest, IntPtr src, int size);

private void OnAudioAvailable(object sender, AsioAudioAvailableEventArgs e)
    {
        for (int i = 0; i < e.InputBuffers.Length; i++)
        {
            MoveMemory(e.OutputBuffers[i], e.InputBuffers[i], e.SamplesPerBuffer * e.InputBuffers.Length);
        }
        e.WrittenToOutputBuffers = true;
    }

But done this way there is a bit of latency and a bit of echo, and I don't know how to solve them. So if you have any ideas, I'm listening.
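One thing worth checking in the snippet above is the size passed to `MoveMemory`: `e.SamplesPerBuffer` counts samples per channel, so multiplying it by the channel count may copy past the end of each channel buffer, which is a plausible source of the echo. A hedged sketch of the per-channel copy with the byte count derived from the sample size instead; `bytesPerSample` is an assumption here (for a driver reporting `Int32LSB` it would be 4, and `e.AsioSampleType` tells you which type the driver uses). The sketch operates on managed arrays for clarity, whereas the real handler copies between `IntPtr` buffers:

```csharp
using System;

class AsioCopySketch
{
    // Each channel buffer holds samplesPerBuffer samples of
    // bytesPerSample bytes each, so the byte count to copy is
    // samplesPerBuffer * bytesPerSample -- NOT samplesPerBuffer * channels.
    public static void CopyChannel(byte[] input, byte[] output,
                                   int samplesPerBuffer, int bytesPerSample)
    {
        int byteCount = samplesPerBuffer * bytesPerSample;
        Array.Copy(input, output, byteCount);
    }
}
```

In the real handler that would correspond to `MoveMemory(e.OutputBuffers[i], e.InputBuffers[i], e.SamplesPerBuffer * bytesPerSample)`.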

Unfortunately I could not find a way to get ASIO to work, but I have come up with an alternative method that works just as well. As for latency, I got it down to 50 ms, and I have been looking through the NAudio source to see if there might be a way to get it below that (roughly 20-30 ms) for better real-time play.

    private BufferedWaveProvider buffer;
    private WaveOut waveOut;
    private WaveIn sourceStream = null;

    private bool listen = false;

    private void listenBtn_Click(object sender, EventArgs e)
    {
        listen = !listen;
        if (listen)
            listenBtn.Text = "Stop listening";
        else
        {
            listenBtn.Text = "Listen";
            sourceStream.StopRecording();
            return;
        }

        sourceStream = new WaveIn();
        sourceStream.WaveFormat = new WaveFormat(44100, 1);

        waveOut = new WaveOut(WaveCallbackInfo.FunctionCallback());

        sourceStream.DataAvailable += new EventHandler<WaveInEventArgs>(sourceStream_DataAvailable);
        sourceStream.RecordingStopped += new EventHandler<StoppedEventArgs>(sourceStream_RecordingStopped);

        buffer = new BufferedWaveProvider(sourceStream.WaveFormat);
        buffer.DiscardOnBufferOverflow = true;
        waveOut.DesiredLatency = 51;
        waveOut.Volume = 1f;
        waveOut.Init(buffer);
        sourceStream.StartRecording();
    }

    private void sourceStream_DataAvailable(object sender, WaveInEventArgs e)
    {
        buffer.AddSamples(e.Buffer, 0, e.BytesRecorded);

        // Play() is a no-op while already playing, but guarding on the
        // state makes the intent clear: start playback once data arrives.
        if (waveOut.PlaybackState != PlaybackState.Playing)
            waveOut.Play();
    }

    private void sourceStream_RecordingStopped(object sender, StoppedEventArgs e)
    {
        sourceStream.Dispose();
        waveOut.Dispose();
    }

Again, I understand that this is not using ASIO, but it was the better alternative given the resources and documentation I had available. Instead of using ASIO I am just creating a WaveIn and mocking a "recording", but instead of writing it to a file I take the stream and push it into a WaveOut buffer, which lets it play after I do some sound manipulation.
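The "sound manipulation" has to happen on samples, while `WaveInEventArgs.Buffer` holds raw 16-bit PCM bytes (for the `WaveFormat(44100, 1)` above). A hedged sketch of the round trip; the `processSample` hook stands in for whatever distortion is applied and is an assumption, not code from the answer:

```csharp
using System;

class PcmProcessing
{
    // Decode little-endian 16-bit PCM to floats in [-1, 1), run a
    // per-sample hook, and write the result back in place, ready to be
    // handed to BufferedWaveProvider.AddSamples.
    public static void ProcessInPlace(byte[] pcm, int byteCount,
                                      Func<float, float> processSample)
    {
        for (int i = 0; i < byteCount; i += 2)
        {
            short s = (short)(pcm[i] | (pcm[i + 1] << 8));
            float f = processSample(s / 32768f);
            // Clamp before narrowing back to 16 bits to avoid wrap-around.
            int v = (int)Math.Round(f * 32768f);
            short o = (short)Math.Max(short.MinValue, Math.Min(short.MaxValue, v));
            pcm[i] = (byte)(o & 0xFF);
            pcm[i + 1] = (byte)((o >> 8) & 0xFF);
        }
    }
}
```

Called from `sourceStream_DataAvailable` before `buffer.AddSamples(...)`, this gives a place to plug the distortion into the WaveIn/WaveOut chain.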

I may be wrong, but I have successfully managed simultaneous ASIO record and playback using NAudio with very low latencies (on very cheap USB audio hardware ;).

Instead of the event-handler method used in your first example, you may try this:

    private float[] recordingBuffer = null;
    private byte[] recordingByteBuffer = null;

    private BufferedWaveProvider bufferedWaveProvider;
    private BufferedSampleProvider bsp;
    private SampleToWaveProvider swp;



    // somewhere, e.g. in the constructor: set up the signal chain
    bufferedWaveProvider = new BufferedWaveProvider(waveFormat);
    //bufferedWaveProvider.DiscardOnBufferOverflow = true;

    bsp = new BufferedSampleProvider(waveFormat);
    swp = new SampleToWaveProvider(bsp);
    // ...


    private void OnAudioAvailable(object sender, AsioAudioAvailableEventArgs e)
    {
        this.recordingBuffer = BufferHelpers.Ensure(this.recordingBuffer, e.SamplesPerBuffer * e.InputBuffers.Length);
        this.recordingByteBuffer = BufferHelpers.Ensure(this.recordingByteBuffer, e.SamplesPerBuffer  * 4 * e.InputBuffers.Length);

        int count = e.GetAsInterleavedSamples(this.recordingBuffer);

        this.bsp.CurrentBuffer = this.recordingBuffer;

        int count2 = this.swp.Read(this.recordingByteBuffer, 0, count * 4);

        bufferedWaveProvider.AddSamples(this.recordingByteBuffer, 0, this.recordingByteBuffer.Length);
    }

with the class BufferedSampleProvider.cs:

public class BufferedSampleProvider : ISampleProvider
{
    private WaveFormat waveFormat;
    private float[] currentBuffer;

    public BufferedSampleProvider(WaveFormat waveFormat)
    {
        this.waveFormat = waveFormat;
        this.currentBuffer = null;
    }

    public float[] CurrentBuffer 
    {
        get { return this.currentBuffer; }
        set { this.currentBuffer = value; }
    }

    public int Read(float[] buffer, int offset, int count)
    {
        if (this.currentBuffer != null)
        {
            if (count <= currentBuffer.Length)
            {
                for (int i = 0; i < count; i++)
                {
                    buffer[i] = this.currentBuffer[i];
                }
                return count;
            }
        }
        return 0;
    }

    public WaveFormat WaveFormat
    {
        get { return this.waveFormat; }
    }
}

I did it this (messy) way because otherwise I would have to copy the bytes out of the ASIO buffers depending on the sample byte count, and so on (look at the source code of the GetAsInterleavedSamples(...) method). To keep things simple, I used a BufferedWaveProvider to be really sure there are enough (filled) buffers on the output side of my signal chain; I don't strictly need it, but it's safe. After several processing blocks following this provider, the chain ends in the last provider, "output". That last provider was passed into

asioOut.InitRecordAndPlayback(output, this.InputChannels, this.SampleRate);

when initializing the objects. Even with many processing blocks in my chain, I get no audible dropouts or buzzing with an ASIO buffer size of 512 samples. But I think this really depends on the ASIO hardware used. The most important thing for me was to be sure the input and output stay in sync.

For comparison: if I use WaveIn/WaveOutEvent in the same way, I can reach nearly the same latency (on the same cheap hardware), but since my tests also ran across two separate sound devices, the input buffer duration grows over time due to drops or unsynchronized audio clocks ;) To reach very low latency even in a WPF application, I had to patch the WaveOutEvent class to raise the playback thread's priority to the highest possible, which helps "against" most of the possible GC interruptions.

Currently, it seems that by using the ASIO interface I have avoided this GC problem entirely.

Hope this helps.
