
NAudio unit from samples

I've been looking at loads of web pages, but none of them indicate what unit the values placed into the array by the ReadInts function in the code below are in.

I'm using NAudio to get the samples from an audio interface and can't work out whether the values are in mV or something else.

I assume I could just dump the memory stream into a file and use it like a normal wave file, but I'm not 100% sure about that.
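For reference, raw PCM bytes only need a RIFF/WAVE header in front of them to become a normal wave file; NAudio's `WaveFileWriter` can do this for you. A minimal hand-rolled sketch of what that header contains (assuming plain integer PCM; the class and method names here are hypothetical):

```csharp
using System;
using System.IO;

class WavDump
{
    // Wrap raw PCM bytes in a minimal 44-byte RIFF/WAVE header so the
    // capture can be opened as a normal .wav file. (NAudio's
    // WaveFileWriter writes essentially the same structure.)
    public static void WriteWav(string path, byte[] pcm, int sampleRate,
                                short bitsPerSample, short channels)
    {
        short blockAlign = (short)(channels * bitsPerSample / 8);
        int byteRate = sampleRate * blockAlign;
        using (var bw = new BinaryWriter(File.Create(path)))
        {
            bw.Write(new[] { 'R', 'I', 'F', 'F' });
            bw.Write(36 + pcm.Length);              // RIFF chunk size
            bw.Write(new[] { 'W', 'A', 'V', 'E' });
            bw.Write(new[] { 'f', 'm', 't', ' ' });
            bw.Write(16);                           // fmt chunk size
            bw.Write((short)1);                     // format tag 1 = PCM
            bw.Write(channels);
            bw.Write(sampleRate);
            bw.Write(byteRate);
            bw.Write(blockAlign);
            bw.Write(bitsPerSample);
            bw.Write(new[] { 'd', 'a', 't', 'a' });
            bw.Write(pcm.Length);                   // data chunk size
            bw.Write(pcm);
        }
    }
}
```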

The data contains multiple channels: one is a reference, and the others need to be compared against that reference channel for what I'm doing. However, this doesn't seem possible without knowing what the unit is.

    public List<byte> Data { get; protected set; }
    public void CreateWaveIn()
    {
        wvin = new NAudio.Wave.WaveInEvent();
        wvin.DeviceNumber = _port;
        wvin.WaveFormat = new NAudio.Wave.WaveFormat(ChosenSampleRate, BitRate, NumberOfChannels);
        wvin.DataAvailable += OnDataAvailable;
        wvin.BufferMilliseconds = 20;
    }

    private void OnDataAvailable(object sender, NAudio.Wave.WaveInEventArgs args)
    {
        int bytesPerSample = (wvin.WaveFormat.BitsPerSample / 8) * wvin.WaveFormat.Channels;
        for (int index = 0; index < args.BytesRecorded; index++)
        {
            if (ObtaingingData)
            {
                Data.Add((args.Buffer[index]));
            }
        }
    }

    public override bool StopObtaingData()
    {
        wvin.StopRecording();
        wvin.DataAvailable -= OnDataAvailable;
        wvin = null;
        ObtaingingData = false;
        WaveStuff();
        return true;
    }

    public void WaveStuff()
    {
            using (MemoryStream wave = new MemoryStream())
            {
                wave.Write(Data.ToArray(), 0, Data.Count);
                wave.Position = 0;

                int endIndex = Data.Count - 1;
                int startIndex = 0;
                int blockCount = (endIndex - startIndex) / fft.BlockSize;

                // endIndex = startIndex + blockCount * fft.BlockSize;

                // Accumulate the FFT of each block
                int blockID = 0;

                // Silently read the first unused samples
                if (wave.CanRead && startIndex > 0)
                {
                    ReadInts(startIndex, channel, wave);
                }

                while (wave.CanRead && blockID < blockCount)
                {
                    int[] data;

                    data = ReadInts(fft.BlockSize, channel, wave);

                    if (data.Length == 0)
                    {
                        break;
                    }

                    fft.Data = data;
                    double[] tmp = fft.GetMagnitudeSpectrum();

                    for (int j = 0; j < tmp.Length; j++)
                    {
                        result[j] += tmp[j];
                    }

                    blockID++;
                    fftCount++;
                }
            }
    }

    public int[] ReadInts(int count, int selectedChannel, MemoryStream wave)
    {
        int bytesPerSample = BitRate / 8;
        int offset = bytesPerSample * NumberOfChannels;
        int channelOffset = bytesPerSample * selectedChannel;

        byte[] bytes = new byte[count * offset];
        byte[] tempBytes = new byte[4];
        count = Read(bytes, 0, bytes.Length, wave) / offset;
        int[] values = new int[count];
        //   double[] values = new double[count];
        for (int i = 0; i < count; i++)
        {
            switch (BitRate)
            {
                case 8:
                    values[i] = bytes[(i * offset) + channelOffset];
                    break;
                case 16:
                    values[i] = BitConverter.ToInt16(bytes, (i * offset) + channelOffset);
                    break;
                case 24:
                    Array.Copy(bytes, (i * offset) + channelOffset, tempBytes, 1, 3);
                    values[i] = BitConverter.ToInt32(tempBytes, 0) >> 8;
                    break;
                case 32:
                    values[i] = BitConverter.ToInt32(bytes, (i * offset) + channelOffset);
                    break;
                default:
                    break;
            }
        }

        return values;
    }

I would presume the audio is using Pulse Code Modulation (PCM), i.e. each sample represents a pressure level at a given instant of time for a given channel. This would usually not be calibrated, meaning that while a value of 100 represents a higher pressure than a value of 10, there is no telling how much higher, or what exact pressure it corresponds to.
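To make that concrete: for 16-bit PCM each sample is a signed integer in [-32768, 32767], so its "unit" is a dimensionless fraction of full scale (often quoted in dBFS), not millivolts; what voltage that corresponds to depends on the interface's input gain, which is usually unknown. A minimal sketch (helper names are hypothetical):

```csharp
using System;

class PcmUnits
{
    // A 16-bit PCM sample is a signed integer in [-32768, 32767].
    // Dividing by 32768 gives a dimensionless full-scale fraction
    // in [-1, 1); there is no physical unit attached to it.
    public static double ToFullScale(short sample) => sample / 32768.0;

    // Full-scale ratios are commonly expressed in decibels (dBFS),
    // where 0 dBFS is the loudest representable sample.
    public static double ToDbfs(short sample) =>
        20.0 * Math.Log10(Math.Abs(sample / 32768.0));
}
```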

If the channels are recorded with the same equipment and settings, you should simply be able to subtract the samples of one channel from the other. Otherwise you might want to normalize the samples first, i.e. find the largest and smallest value for each channel, and re-scale one or the other, or both, to use the same range.
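The normalize-then-subtract idea could be sketched like this (hypothetical helpers; this assumes the two channels are already time-aligned and of equal length):

```csharp
using System;
using System.Linq;

class ChannelCompare
{
    // Rescale a channel so its largest absolute sample becomes 1.0.
    // After this, channels recorded at different gains share a range.
    public static double[] Normalize(int[] samples)
    {
        double peak = samples.Max(s => Math.Abs((double)s));
        if (peak == 0) return samples.Select(s => 0.0).ToArray();
        return samples.Select(s => s / peak).ToArray();
    }

    // Normalize both channels, then subtract sample-by-sample.
    public static double[] Difference(int[] channel, int[] reference)
    {
        double[] a = Normalize(channel);
        double[] b = Normalize(reference);
        return a.Zip(b, (x, y) => x - y).ToArray();
    }
}
```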
