NAudio - wave split and combine wave in realtime
I'm using a multi-input sound card and want to implement live mixing of several inputs. All inputs are stereo, so I first need to split them, mix the selected channels, and serve the result as a single mono stream.
The goal is a mix like this: Channel1 [left] + Channel3 [right] + Channel4 [right] -> single stream.
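For reference, downmixing the selected channels to mono in 32-bit IEEE float comes down to a per-sample average; a hypothetical sketch (the method name and parameters are illustrative, not part of NAudio):

```csharp
// Hypothetical per-sample downmix of the three selected channels (IEEE float).
// ch1Left, ch3Right, ch4Right are float samples taken from the split streams.
static float MixToMono(float ch1Left, float ch3Right, float ch4Right)
{
    // Average rather than sum, so the result stays within [-1.0f, 1.0f]
    // as long as the inputs do.
    return (ch1Left + ch3Right + ch4Right) / 3f;
}
```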
I have implemented the following processing chain:
1) WaveIn -> create one BufferedWaveProvider per channel -> in wavein.DataAvailable += ..., add each channel's samples to its own BufferedWaveProvider via buffwavprovider[channel].AddSamples(...). The result is a list of BufferedWaveProviders, one per channel. The audio-splitting part works correctly.
2) Select several BufferedWaveProviders and feed them into a MixingWaveProvider32. Then create a WaveStream from it (using WaveMixerStream32 and IWaveProvider).
3) A MultiChannelToMonoStream takes that WaveStream and produces the mixdown. This works too.
The result, however, is choppy audio, as if there is some trouble with the buffers.
Is this the right way to solve the problem, or is there a better approach?
Edit - adding code:
public class AudioSplitter
{
    public List<NamedBufferedWaveProvider> WaveProviders { private set; get; }
    public string Name { private set; get; }

    private WaveIn _wavIn;
    private int bytes_per_sample = 4;

    /// <summary>
    /// Splits up one WaveIn into one BufferedWaveProvider for each channel
    /// </summary>
    /// <param name="wavein"></param>
    /// <param name="name"></param>
    public AudioSplitter(WaveIn wavein, string name)
    {
        if (wavein.WaveFormat.Encoding != WaveFormatEncoding.IeeeFloat)
            throw new Exception("Format must be IEEE float");

        WaveProviders = new List<NamedBufferedWaveProvider>(wavein.WaveFormat.Channels);
        Name = name;
        _wavIn = wavein;
        _wavIn.StartRecording();

        var outFormat = NAudio.Wave.WaveFormat.CreateIeeeFloatWaveFormat(wavein.WaveFormat.SampleRate, 1);
        for (int i = 0; i < wavein.WaveFormat.Channels; i++)
        {
            WaveProviders.Add(new NamedBufferedWaveProvider(outFormat) { DiscardOnBufferOverflow = true, Name = Name + "_" + i });
        }

        bytes_per_sample = _wavIn.WaveFormat.BitsPerSample / 8;
        wavein.DataAvailable += Wavein_DataAvailable;
    }

    /// <summary>
    /// Add the samples of each channel to its BufferedWaveProvider.
    /// </summary>
    /// <param name="sender"></param>
    /// <param name="e"></param>
    private void Wavein_DataAvailable(object sender, WaveInEventArgs e)
    {
        int channel = 0;
        byte[] buffer = e.Buffer;
        for (int i = 0; i < e.BytesRecorded - bytes_per_sample; i = i + bytes_per_sample)
        {
            byte[] channel_buffer = new byte[bytes_per_sample];
            for (int j = 0; j < bytes_per_sample; j++)
            {
                channel_buffer[j] = buffer[i + j];
            }
            WaveProviders[channel].AddSamples(channel_buffer, 0, channel_buffer.Length);
            channel++;
            if (channel >= _wavIn.WaveFormat.Channels)
                channel = 0;
        }
    }
}
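A minimal sketch of how the class above might be wired up (the device index, sample rate, and channel count are assumptions, not taken from the question):

```csharp
// Assumed setup: device 0, 44.1 kHz stereo, IEEE float (required by the
// AudioSplitter constructor above).
var waveIn = new WaveIn
{
    DeviceNumber = 0,
    WaveFormat = WaveFormat.CreateIeeeFloatWaveFormat(44100, 2)
};

var splitter = new AudioSplitter(waveIn, "card1");
// splitter.WaveProviders[0] now carries the left channel,
// splitter.WaveProviders[1] the right, each as a mono float stream.
```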
Using an AudioSplitter for each input yields a list of BufferedWaveProviders (one mono 32-bit float stream per channel).
var mix = new MixingWaveProvider32(_waveProviders);
var wps = new WaveProviderToWaveStream(mix);
MultiChannelToMonoStream mms = new MultiChannelToMonoStream(wps);

new Thread(() =>
{
    byte[] buffer = new byte[4096];
    int read;
    while ((read = mms.Read(buffer, 0, buffer.Length)) > 0 && isrunning)
    {
        using (FileStream fs = new FileStream("C:\\temp\\audio\\mono_32.wav", FileMode.Append, FileAccess.Write))
        {
            // Write only the bytes actually read, not the whole buffer.
            fs.Write(buffer, 0, read);
        }
    }
}).Start();
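Note that appending raw sample bytes to a .wav file produces a file with no RIFF header, so most players will not open it. NAudio's WaveFileWriter writes and maintains the header; a sketch of the same loop using it (path and the isrunning flag as in the question):

```csharp
// WaveFileWriter writes the RIFF/WAVE header and updates the data-chunk
// length as samples are written.
using (var writer = new WaveFileWriter("C:\\temp\\audio\\mono_32.wav", mms.WaveFormat))
{
    byte[] buffer = new byte[4096];
    int read;
    while (isrunning && (read = mms.Read(buffer, 0, buffer.Length)) > 0)
    {
        writer.Write(buffer, 0, read);
    }
}
```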
There is still some room for optimization, but basically this gets the job done:
private void Wavein_DataAvailable(object sender, WaveInEventArgs e)
{
    int channel = 0;
    byte[] buffer = e.Buffer;
    List<List<byte>> channelbuffers = new List<List<byte>>();
    for (int c = 0; c < _wavIn.WaveFormat.Channels; c++)
    {
        channelbuffers.Add(new List<byte>());
    }
    for (int i = 0; i < e.BytesRecorded; i++)
    {
        var byteList = channelbuffers[channel];
        byteList.Add(buffer[i]);
        // Advance to the next channel after each complete sample.
        if (i % bytes_per_sample == bytes_per_sample - 1)
            channel++;
        if (channel >= _wavIn.WaveFormat.Channels)
            channel = 0;
    }
    for (int j = 0; j < channelbuffers.Count; j++)
    {
        WaveProviders[j].AddSamples(channelbuffers[j].ToArray(), 0, channelbuffers[j].Count());
    }
}
We need one WaveProvider per channel (WaveProviders[j]).
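If the per-callback List&lt;byte&gt; allocations ever become a bottleneck, the same de-interleave can be written with plain arrays and Buffer.BlockCopy; a hedged sketch (assumes BytesRecorded is a whole number of frames, which WaveIn normally delivers):

```csharp
private void Wavein_DataAvailable(object sender, WaveInEventArgs e)
{
    int channels = _wavIn.WaveFormat.Channels;
    int frameSize = bytes_per_sample * channels;
    int frames = e.BytesRecorded / frameSize;

    // One contiguous buffer per channel, sized once per callback.
    var channelBuffers = new byte[channels][];
    for (int c = 0; c < channels; c++)
        channelBuffers[c] = new byte[frames * bytes_per_sample];

    for (int f = 0; f < frames; f++)
    {
        for (int c = 0; c < channels; c++)
        {
            // Copy one sample of channel c out of the interleaved frame.
            Buffer.BlockCopy(e.Buffer,
                f * frameSize + c * bytes_per_sample,
                channelBuffers[c],
                f * bytes_per_sample,
                bytes_per_sample);
        }
    }

    for (int c = 0; c < channels; c++)
        WaveProviders[c].AddSamples(channelBuffers[c], 0, channelBuffers[c].Length);
}
```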