
NAudio - splitting and combining waves in real time

I am working with a multi-input soundcard and I want to achieve live mixing of multiple inputs. All the inputs are stereo, so I first need to split them into channels, mix a selection of those channels, and provide the result as a mono stream.

The goal would be a mix like this: Channel1[left] + Channel3[right] + Channel4[right] -> mono stream.

I have already implemented a processing chain like this:

1) WaveIn -> create a BufferedWaveProvider for each channel -> add samples (just the ones for the current channel) to each BufferedWaveProvider via wavein.DataAvailable += { buffwavprovider[channel].AddSamples(...) }. This gives me a nice list of multiple BufferedWaveProviders. The audio-splitting part here is implemented correctly.

2) Select multiple BufferedWaveProviders and feed them to a MixingWaveProvider32. Then create a WaveStream (using WaveMixerStream32 and IWaveProvider).

3) A MultiChannelToMonoStream takes that WaveStream and generates a mixdown. This also works.

But the result is that the audio is chopped, as if there were some trouble with the buffer.

Is this the correct way to handle this problem, or is there a better solution?

edit - code added:

public class AudioSplitter
   {
      public List<NamedBufferedWaveProvider> WaveProviders { private set; get; }
      public string Name { private set; get; }
      private WaveIn _wavIn;
      private int bytes_per_sample = 4;

      /// <summary>
      /// Splits up one WaveIn into one BufferedWaveProvider for each channel
      /// </summary>
      /// <param name="wavein"></param>
      /// <returns></returns>
      public AudioSplitter(WaveIn wavein, string name)
      {
         if (wavein.WaveFormat.Encoding != WaveFormatEncoding.IeeeFloat)
            throw new Exception("Format must be IEEE float");


         WaveProviders = new List<NamedBufferedWaveProvider>(wavein.WaveFormat.Channels);

         Name = name;
         _wavIn = wavein;
         _wavIn.StartRecording();
         var outFormat = NAudio.Wave.WaveFormat.CreateIeeeFloatWaveFormat(wavein.WaveFormat.SampleRate, 1);

         for (int i = 0; i < wavein.WaveFormat.Channels; i++)
         {
            WaveProviders.Add(new NamedBufferedWaveProvider(outFormat) { DiscardOnBufferOverflow = true, Name = Name + "_" + i });
         }

         bytes_per_sample = _wavIn.WaveFormat.BitsPerSample / 8;
         wavein.DataAvailable += Wavein_DataAvailable;
      }


      /// <summary>
      /// add samples for each channel to bufferedwaveprovider
      /// </summary>
      /// <param name="sender"></param>
      /// <param name="e"></param>
      private void Wavein_DataAvailable(object sender, WaveInEventArgs e)
      {
         int channel = 0;
         byte[] buffer = e.Buffer;
         for (int i = 0; i < e.BytesRecorded - bytes_per_sample; i = i + bytes_per_sample)
         {
            byte[] channel_buffer = new byte[bytes_per_sample];

            for (int j = 0; j < bytes_per_sample; j++)
            {
               channel_buffer[j] = buffer[i + j];
            }

            WaveProviders[channel].AddSamples(channel_buffer, 0, channel_buffer.Length);

            channel++;

            if (channel >= _wavIn.WaveFormat.Channels)
               channel = 0;

         }

      }
   }

Using the AudioSplitter for each device gives a list of BufferedWaveProviders (one mono, 32-bit float provider per channel).

 var mix = new MixingWaveProvider32(_waveProviders);
 var wps = new WaveProviderToWaveStream(mix);
 MultiChannelToMonoStream mms = new MultiChannelToMonoStream(wps);

 new Thread(() =>
  {
     byte[] buffer = new byte[4096];
     int read;

     while (isrunning && (read = mms.Read(buffer, 0, buffer.Length)) > 0)
     {
        using (FileStream fs = new FileStream("C:\\temp\\audio\\mono_32.wav", FileMode.Append, FileAccess.Write))
        {
           fs.Write(buffer, 0, read); // only write the bytes actually read
        }
     }
  }).Start();
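Note that appending raw sample bytes to a file named `.wav` produces a headerless raw PCM file, which most players will not open. NAudio's WaveFileWriter writes a proper RIFF header for you; a minimal sketch of the same loop (assuming the `mms` stream and `isrunning` flag from above):

```csharp
// Sketch: write the mixdown through NAudio's WaveFileWriter so the
// output file gets a valid WAV header matching the stream's format.
using (var writer = new WaveFileWriter(@"C:\temp\audio\mono_32.wav", mms.WaveFormat))
{
    byte[] buffer = new byte[4096];
    int read;
    while (isrunning && (read = mms.Read(buffer, 0, buffer.Length)) > 0)
    {
        writer.Write(buffer, 0, read); // writer updates the header on Dispose
    }
}
```

WaveFileWriter also fixes up the RIFF length fields when it is disposed, so the file stays valid even if recording stops mid-buffer.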

There is some room left for optimization, but basically this gets the job done:

  private void Wavein_DataAvailable(object sender, WaveInEventArgs e)
      {
         int channel = 0;
         byte[] buffer = e.Buffer;

         List<List<byte>> channelbuffers = new List<List<byte>>();
         for (int c = 0; c < _wavIn.WaveFormat.Channels; c++)
         {
            channelbuffers.Add(new List<byte>());
         }

         for (int i = 0; i < e.BytesRecorded; i++)
         {
            var byteList = channelbuffers[channel];

            byteList.Add(buffer[i]);

            if (i % bytes_per_sample == bytes_per_sample - 1)
               channel++;

            if (channel >= _wavIn.WaveFormat.Channels)
               channel = 0;
         }

         for (int j = 0; j < channelbuffers.Count; j++)
         {
            WaveProviders[j].AddSamples(channelbuffers[j].ToArray(), 0, channelbuffers[j].Count);
         }

      }

We need to provide one WaveProvider (WaveProviders[j]) for each channel.
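The per-call `List<List<byte>>` allocations in the handler above can be avoided by de-interleaving directly into per-channel byte arrays. A minimal sketch of the index arithmetic, using a hypothetical helper (`Deinterleave` is not part of NAudio):

```csharp
using System;

// Hypothetical helper: split one interleaved DataAvailable buffer into
// one byte array per channel. A "frame" is one sample for every channel.
public static byte[][] Deinterleave(byte[] buffer, int bytesRecorded,
                                    int channels, int bytesPerSample)
{
    int frameSize = channels * bytesPerSample;
    int frames = bytesRecorded / frameSize; // whole frames only
    var result = new byte[channels][];
    for (int c = 0; c < channels; c++)
        result[c] = new byte[frames * bytesPerSample];

    for (int f = 0; f < frames; f++)
        for (int c = 0; c < channels; c++)
            Buffer.BlockCopy(buffer,
                             f * frameSize + c * bytesPerSample, // source offset
                             result[c],
                             f * bytesPerSample,                 // destination offset
                             bytesPerSample);
    return result;
}
```

Inside `Wavein_DataAvailable` this would replace the list building: call `Deinterleave(e.Buffer, e.BytesRecorded, channels, bytes_per_sample)` once, then `WaveProviders[c].AddSamples(split[c], 0, split[c].Length)` per channel.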
