NAudio - wave split and combine wave in realtime
I am using a multi-input sound card and want to implement live mixing of several inputs. All inputs are stereo, so I first need to split them, mix the selected channels, and deliver the result as a single mono stream.
The goal is a mix like this: Channel1[left] + Channel3[right] + Channel4[right] -> single stream.
I have implemented the following processing chain:
1) WaveIn -> create a BufferedWaveProvider for each channel -> in wavein.DataAvailable += ... call buffwavprovider[channel].AddSamples(...) so that each BufferedWaveProvider receives only the samples of its own channel. This gives a nice list of BufferedWaveProviders. The audio-splitting part works correctly.
2) Pick several of the BufferedWaveProviders and feed them into a MixingWaveProvider32. Then create a WaveStream from that (using WaveMixerStream32 and IWaveProvider).
3) A MultiChannelToMonoStream takes that WaveStream and generates the mix. This works too.
The resulting audio, however, is chopped up, as if something is wrong with the buffering...
Is this the right way to approach the problem, or is there a better solution?
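Conceptually, that target mix is just per-frame arithmetic over the selected channels. A minimal language-agnostic sketch (not NAudio code; averaging rather than summing is my assumption here, to keep the result in [-1.0, 1.0]):

```python
# Sketch of the target mix: take channel 1's left sample, channel 3's right
# and channel 4's right from stereo frames, and combine them into one mono
# sample. Averaging (instead of plain summing) is an assumption chosen to
# avoid clipping; function and parameter names are illustrative.

def mix_frame(ch1_frame, ch3_frame, ch4_frame):
    """Each *_frame is a (left, right) tuple of float samples."""
    selected = [ch1_frame[0], ch3_frame[1], ch4_frame[1]]
    return sum(selected) / len(selected)

print(mix_frame((0.5, 0.0), (0.0, 0.25), (0.0, 0.75)))  # -> 0.5
```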
Edit - adding the code:
public class AudioSplitter
{
    public List<NamedBufferedWaveProvider> WaveProviders { private set; get; }
    public string Name { private set; get; }

    private WaveIn _wavIn;
    private int bytes_per_sample = 4;

    /// <summary>
    /// Splits up one WaveIn into one BufferedWaveProvider for each channel.
    /// </summary>
    /// <param name="wavein"></param>
    /// <param name="name"></param>
    public AudioSplitter(WaveIn wavein, string name)
    {
        if (wavein.WaveFormat.Encoding != WaveFormatEncoding.IeeeFloat)
            throw new Exception("Format must be IEEE float");

        WaveProviders = new List<NamedBufferedWaveProvider>(wavein.WaveFormat.Channels);
        Name = name;
        _wavIn = wavein;
        _wavIn.StartRecording();

        var outFormat = NAudio.Wave.WaveFormat.CreateIeeeFloatWaveFormat(wavein.WaveFormat.SampleRate, 1);
        for (int i = 0; i < wavein.WaveFormat.Channels; i++)
        {
            WaveProviders.Add(new NamedBufferedWaveProvider(outFormat) { DiscardOnBufferOverflow = true, Name = Name + "_" + i });
        }
        bytes_per_sample = _wavIn.WaveFormat.BitsPerSample / 8;
        wavein.DataAvailable += Wavein_DataAvailable;
    }

    /// <summary>
    /// Adds the samples of each channel to that channel's BufferedWaveProvider.
    /// </summary>
    /// <param name="sender"></param>
    /// <param name="e"></param>
    private void Wavein_DataAvailable(object sender, WaveInEventArgs e)
    {
        int channel = 0;
        byte[] buffer = e.Buffer;
        for (int i = 0; i < e.BytesRecorded - bytes_per_sample; i = i + bytes_per_sample)
        {
            byte[] channel_buffer = new byte[bytes_per_sample];
            for (int j = 0; j < bytes_per_sample; j++)
            {
                channel_buffer[j] = buffer[i + j];
            }
            WaveProviders[channel].AddSamples(channel_buffer, 0, channel_buffer.Length);
            channel++;
            if (channel >= _wavIn.WaveFormat.Channels)
                channel = 0;
        }
    }
}
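For what it's worth, the de-interleaving loop in Wavein_DataAvailable can be sketched outside of C#. Note that its bound, `i < e.BytesRecorded - bytes_per_sample`, stops one sample early, so the last sample of every callback buffer is never copied to any channel; over time the channel buffers drift apart in length. A Python sketch that mirrors the loop (4-byte IEEE-float samples, as in the original):

```python
def split_channels(buffer, bytes_recorded, channels, bytes_per_sample=4):
    """Mirror of the original loop, including its early stop: it iterates
    while i < bytes_recorded - bytes_per_sample, so the final sample of
    the buffer is dropped on every callback."""
    out = [bytearray() for _ in range(channels)]
    channel = 0
    i = 0
    while i < bytes_recorded - bytes_per_sample:
        out[channel] += buffer[i:i + bytes_per_sample]
        channel = (channel + 1) % channels
        i += bytes_per_sample
    return out

# 4 stereo frames = 8 samples = 32 bytes; the last right-channel sample is lost.
buf = bytes(range(32))
left, right = split_channels(buf, len(buf), channels=2)
print(len(left) // 4, len(right) // 4)  # -> 4 3
```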
Using the AudioSplitter gives us one BufferedWaveProvider per channel (mono, 32-bit IEEE float).
var mix = new MixingWaveProvider32(_waveProviders);
var wps = new WaveProviderToWaveStream(mix);
MultiChannelToMonoStream mms = new MultiChannelToMonoStream(wps);

new Thread(() =>
{
    byte[] buffer = new byte[4096];
    int read;
    while ((read = mms.Read(buffer, 0, buffer.Length)) > 0 && isrunning)
    {
        using (FileStream fs = new FileStream("C:\\temp\\audio\\mono_32.wav", FileMode.Append, FileAccess.Write))
        {
            // Write only the bytes actually read, not the whole buffer.
            fs.Write(buffer, 0, read);
        }
    }
}).Start();
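As I understand it, MixingWaveProvider32 mixes by summing the 32-bit IEEE-float samples of its inputs, sample by sample. A minimal Python sketch of that operation (function name is illustrative; it assumes all inputs have equal length and format):

```python
import struct

def mix_float32(streams):
    """Sketch of what a 32-bit float mixer does: decode each input's
    bytes as little-endian IEEE floats, sum them sample by sample,
    and re-encode the result."""
    decoded = [struct.unpack(f"<{len(s) // 4}f", s) for s in streams]
    mixed = [sum(samples) for samples in zip(*decoded)]
    return struct.pack(f"<{len(mixed)}f", *mixed)

a = struct.pack("<3f", 0.25, 0.5, -0.5)
b = struct.pack("<3f", 0.25, 0.25, 0.5)
print(struct.unpack("<3f", mix_float32([a, b])))  # -> (0.5, 0.75, 0.0)
```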
There is still some room for optimization, but this basically does the job:
private void Wavein_DataAvailable(object sender, WaveInEventArgs e)
{
    int channel = 0;
    byte[] buffer = e.Buffer;
    List<List<byte>> channelbuffers = new List<List<byte>>();
    for (int c = 0; c < _wavIn.WaveFormat.Channels; c++)
    {
        channelbuffers.Add(new List<byte>());
    }
    for (int i = 0; i < e.BytesRecorded; i++)
    {
        var byteList = channelbuffers[channel];
        byteList.Add(buffer[i]);
        if (i % bytes_per_sample == bytes_per_sample - 1)
            channel++;
        if (channel >= _wavIn.WaveFormat.Channels)
            channel = 0;
    }
    for (int j = 0; j < channelbuffers.Count; j++)
    {
        WaveProviders[j].AddSamples(channelbuffers[j].ToArray(), 0, channelbuffers[j].Count);
    }
}
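The corrected handler walks every byte and only advances the channel after the final byte of each sample, so no bytes are dropped and all channel buffers stay the same length. The same logic sketched in Python (illustrative, not NAudio code):

```python
def split_channels(buffer, bytes_recorded, channels, bytes_per_sample=4):
    """Byte-wise split as in the corrected handler: advance the channel
    only after the last byte of each sample, so every byte lands in a
    channel buffer and the channels come out equally long."""
    out = [bytearray() for _ in range(channels)]
    channel = 0
    for i in range(bytes_recorded):
        out[channel].append(buffer[i])
        if i % bytes_per_sample == bytes_per_sample - 1:
            channel = (channel + 1) % channels
    return out

buf = bytes(range(32))  # 4 stereo frames, 4-byte samples
left, right = split_channels(buf, len(buf), channels=2)
print(len(left) // 4, len(right) // 4)  # -> 4 4
```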
We need one WaveProvider per channel (WaveProviders[j]).