
Efficiently convert audio bytes - byte[] to short[]

I'm trying to use the XNA microphone to capture audio and pass it to an API I have that analyses the data for display purposes. However, the API requires the audio data in an array of 16 bit integers. So my question is fairly straight forward; what's the most efficient way to convert the byte array into a short array?

    private void _microphone_BufferReady(object sender, System.EventArgs e)
    {
        _microphone.GetData(_buffer);

        short[] shorts;

        //Convert and pass the 16 bit samples
        ProcessData(shorts);
    }

Cheers, Dave

EDIT: This is what I have come up with and it seems to work, but could it be done faster?

    private short[] ConvertBytesToShorts(byte[] bytesBuffer)
    {
        //Shorts array should be half the size of the bytes buffer, as each short represents 2 bytes (16 bits)
        short[] shorts = new short[bytesBuffer.Length / 2];

        int currentStartIndex = 0;

        for (int i = 0; i < shorts.Length; i++)
        {
            //Convert the 2 bytes at the currentStartIndex to a short
            shorts[i] = BitConverter.ToInt16(bytesBuffer, currentStartIndex);

            //increment by 2, ready to combine the next 2 bytes in the buffer
            currentStartIndex += 2;
        }

        return shorts;

    }

After reading your update, I can see you need to copy a byte array directly into a buffer of shorts, merging each pair of bytes into a 16-bit sample. Here's the relevant section from the documentation:

The byte[] buffer format used as a parameter for the SoundEffect constructor, Microphone.GetData method, and DynamicSoundEffectInstance.SubmitBuffer method is PCM wave data. Additionally, the PCM format is interleaved and in little-endian.
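To make that layout concrete, here is a small sketch (my own illustration, not from the documentation) of how two consecutive bytes form one little-endian 16-bit sample:

    // Little-endian: the byte at the lower index is the low-order byte.
    // Sample 0 comes from bytes 0 and 1, sample 1 from bytes 2 and 3, and so on.
    short sample0 = (short)(_buffer[0] | (_buffer[1] << 8));
    short sample1 = (short)(_buffer[2] | (_buffer[3] << 8));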

Now, if for some weird reason your system has BitConverter.IsLittleEndian == false, then you will need to loop through your buffer, swapping bytes as you go, to convert from little-endian to big-endian. I'll leave the code as an exercise - I am reasonably sure all the XNA systems are little-endian.
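For completeness, the in-place swap could look something like this sketch (untested, reusing the _buffer field from your question):

    // Only needed when BitConverter.IsLittleEndian == false:
    // swap each byte pair in place so a straight memory copy yields correct values.
    for (int i = 0; i < _buffer.Length - 1; i += 2)
    {
        byte tmp = _buffer[i];
        _buffer[i] = _buffer[i + 1];
        _buffer[i + 1] = tmp;
    }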

For your purposes, you can just copy the buffer directly using Marshal.Copy or Buffer.BlockCopy. Both will give you the performance of the platform's native memory copy operation, which will be extremely fast:

    // Create this buffer once and reuse it! Don't recreate it each time!
    short[] shorts = new short[_buffer.Length / 2];

    // Option one:
    unsafe
    {
        fixed (short* pShorts = shorts)
            Marshal.Copy(_buffer, 0, (IntPtr)pShorts, _buffer.Length);
    }

    // Option two:
    Buffer.BlockCopy(_buffer, 0, shorts, 0, _buffer.Length);
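Wired into your BufferReady handler, that could look something like this sketch (the _shorts field is my own addition; the other names come from your question):

    private short[] _shorts; // allocated once, reused for every buffer

    private void _microphone_BufferReady(object sender, System.EventArgs e)
    {
        _microphone.GetData(_buffer);

        if (_shorts == null)
            _shorts = new short[_buffer.Length / 2];

        Buffer.BlockCopy(_buffer, 0, _shorts, 0, _buffer.Length);
        ProcessData(_shorts);
    }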

This is a performance question, so: measure it!

It is worth pointing out that for measuring performance in .NET you want to do a release build and run without the debugger attached (this allows the JIT to optimise).
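If you want to compare the approaches yourself, a minimal Stopwatch sketch along these lines would do (buffer size and iteration count are arbitrary assumptions; run it as a release-build console app):

    const int Iterations = 10000;
    byte[] testBuffer = new byte[16 * 1024];
    short[] target = new short[testBuffer.Length / 2];

    // Time the block copy.
    var sw = System.Diagnostics.Stopwatch.StartNew();
    for (int n = 0; n < Iterations; n++)
        Buffer.BlockCopy(testBuffer, 0, target, 0, testBuffer.Length);
    sw.Stop();
    Console.WriteLine("Buffer.BlockCopy:  {0} ms", sw.ElapsedMilliseconds);

    // Time the BitConverter loop for comparison.
    sw.Reset();
    sw.Start();
    for (int n = 0; n < Iterations; n++)
        for (int i = 0; i < target.Length; i++)
            target[i] = BitConverter.ToInt16(testBuffer, i * 2);
    sw.Stop();
    Console.WriteLine("BitConverter loop: {0} ms", sw.ElapsedMilliseconds);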

Jodrell's answer is worth commenting on: using AsParallel is interesting, but it is worth checking whether the cost of spinning it up pays off. (Speculation - measure it to confirm: converting byte to short should be extremely fast, so if your buffer data is coming from shared memory and not a per-core cache, most of your cost will probably be in data transfer, not processing.)

Also I am not sure that ToArray is appropriate. First of all, it may not be able to create the correctly sized array directly; having to resize the array as it builds will make it very slow. Additionally, it will always allocate a new array - which is not slow in itself, but adds a GC cost that you almost certainly don't want.

Edit: Based on your updated question, the code in the rest of this answer is not directly usable, as the format of the data is different. And the technique itself (a loop, safe or unsafe) is not as fast as what you can use. See my other answer for details.

So you want to pre-allocate your array. Somewhere out in your code you want a buffer like this:

    short[] shorts = new short[_buffer.Length];

And then simply copy from one buffer to the other:

    for (int i = 0; i < _buffer.Length; ++i)
        shorts[i] = (short)_buffer[i];

This should be very fast, and the JIT should be clever enough to skip one if not both of the array bounds checks.

And here's how you can do it with unsafe code: (I haven't tested this code, but it should be about right)

    unsafe
    {
        int length = _buffer.Length;
        fixed (byte* pSrc = _buffer) fixed (short* pDst = shorts)
        {
            byte* ps = pSrc;
            short* pd = pDst;

            while (pd < pDst + length)
                *(pd++) = (short)(*(ps++));
        }
    }

Now the unsafe version has the disadvantage of requiring /unsafe, and also it may actually be slower because it prevents the JIT from doing various optimisations. Once again: measure it.

(Also you can probably squeeze more performance if you try some permutations on the above examples. Measure it.)

Finally: Are you sure you want the conversion to be (short)sample? Shouldn't it be something like ((short)sample-128)*256 to take it from unsigned to signed and extend it to the correct bit-width?
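In other words, if the source were unsigned 8-bit PCM (which, per your update, it is not; this is purely illustrative), the widening would look like:

    // Hypothetical 8-bit-unsigned to 16-bit-signed widening, not needed for
    // the 16-bit microphone buffer in the question:
    for (int i = 0; i < _buffer.Length; ++i)
        shorts[i] = (short)((_buffer[i] - 128) * 256);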

The best PLINQ I could come up with is here.

    private short[] ConvertBytesToShorts(byte[] bytesBuffer)
    {
        //The resulting array is half the size of the bytes buffer, as each short combines 2 bytes (16 bits)
        var odd = bytesBuffer.AsParallel().Where((b, i) => i % 2 != 0);
        var even = bytesBuffer.AsParallel().Where((b, i) => i % 2 == 0);

        return odd.Zip(even, (o, e) => (short)((o << 8) | e)).ToArray();
    }

I'm dubious about the performance, but with enough data and processors, who knows.

If the conversion operation ((short)((o << 8) | e)) is wrong, please change it to suit.
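One caveat: by default PLINQ does not guarantee element order, so the odd and even sequences may not line up when zipped. An ordered variant could look like this sketch (untested):

    private short[] ConvertBytesToShortsOrdered(byte[] bytesBuffer)
    {
        // AsOrdered preserves the source index order so Zip pairs each
        // high byte with its matching low byte.
        var odd = bytesBuffer.AsParallel().AsOrdered().Where((b, i) => i % 2 != 0);
        var even = bytesBuffer.AsParallel().AsOrdered().Where((b, i) => i % 2 == 0);

        return odd.Zip(even, (o, e) => (short)((o << 8) | e)).ToArray();
    }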
