
ushort array compression in C#

I've got a ushort array (actually an image coming from a camera) that I'd like to compress losslessly before persistent storage. I'm using the GZipStream class provided in System.IO.Compression. This approach, to my knowledge, requires that I convert the ushort array to a byte array. My solution appears to function properly, but just isn't as quick as I'd like. The images are about 2 MB in raw size; on my slow machine, compression takes 200 to 400 ms and decompression 100 to 200 ms. Looking for suggestions for improving my performance.

public static class Zip
    {

        public static ushort[] Decompress_ByteToShort(byte[] zippedData)
        {
            byte[] decompressedData = null;

            using (MemoryStream outputStream = new MemoryStream())
            {
                using (MemoryStream inputStream = new MemoryStream(zippedData))
                {
                    using (GZipStream zip = new GZipStream(inputStream, CompressionMode.Decompress))
                    {
                        zip.CopyTo(outputStream);
                    }
                }
                decompressedData = outputStream.ToArray();
            }

            ushort[] decompressShort = new ushort[decompressedData.Length / sizeof(ushort)];
            Buffer.BlockCopy(decompressedData, 0, decompressShort, 0, decompressedData.Length);

            return decompressShort;
        }



        public static byte[] Compress_ShortToByte(ushort[] plainData)
        {
            byte[] compressedData = null;

            byte[] uncompressedData = new byte[plainData.Length * sizeof(ushort)];
            Buffer.BlockCopy(plainData, 0, uncompressedData, 0, plainData.Length * sizeof(ushort));

            using (MemoryStream outputStream = new MemoryStream())
            {
                using (GZipStream zip = new GZipStream(outputStream, CompressionMode.Compress))
                {
                    zip.Write(uncompressedData, 0, uncompressedData.Length);
                }
                // Don't read the MemoryStream before the GZipStream is closed:
                // it doesn't yet contain complete compressed data. GZipStream
                // writes additional data, including the footer, when it is disposed.
                compressedData = outputStream.ToArray();
            }

            return compressedData;
        }

    }
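For reference, a minimal round-trip of the class above; the 1024x1024 image size is a hypothetical example matching the ~2 MB raw size mentioned:

    // Hypothetical usage of the Zip class above; dimensions are an example only.
    ushort[] image = new ushort[1024 * 1024];        // ~2 MB of raw 16-bit pixels
    byte[] packed = Zip.Compress_ShortToByte(image);
    ushort[] restored = Zip.Decompress_ByteToShort(packed);
    Console.WriteLine("Compressed to " + packed.Length + " bytes; restored " + restored.Length + " pixels.");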

The first problem I see in your approach is that you are using byte arrays instead of directly loading from and writing to files. Using a smaller temporary buffer and reading/writing streams and files directly in chunks should be much faster.

Here I propose some functions and overloads you can use to compress and decompress from and to byte arrays, streams, and files.

The performance improvement should be 10% to 20%. Try to adjust the constants as needed. I used DeflateStream instead of GZipStream, which increases performance a bit; you can go back to GZipStream if you prefer.

I tried just the byte[]-to-ushort[] and ushort[]-to-byte[] version of the code, and it is about 10% faster. Accessing files directly instead of loading everything into a big buffer should increase performance even more.

WARNING: this way of reading and writing images is not endianness-agnostic: a file saved from an Intel/AMD machine is not compatible with a big-endian ARM machine, found for example in some tablets! Just as a side note :) (For one way to handle this, see the sketch after the code below.)

    /// <summary>The average file size, used to preallocate the right amount of memory for compression.</summary>
    private const int AverageFileSize = 100000;

    /// <summary>The default size of the buffer used to convert data. WARNING: Must be a multiple of 2!</summary>
    private const int BufferSize = 32768;


    /// <summary>Decompresses a byte array to unsigned shorts.</summary>
    public static ushort[] Decompress_ByteToShort(byte[] zippedData)
    {
        using (var inputStream = new MemoryStream(zippedData))
            return Decompress_File(inputStream);
    }

    /// <summary>Decompresses a file to unsigned shorts.</summary>
    public static ushort[] Decompress_File(string inputFilePath)
    {
        using (var stream = new FileStream(inputFilePath, FileMode.Open, FileAccess.Read))
            return Decompress_File(stream);
    }

    /// <summary>Decompresses a file stream to unsigned shorts.</summary>
    public static ushort[] Decompress_File(Stream zippedData)
    {
        using (var zip = new DeflateStream(zippedData, CompressionMode.Decompress, true))
        {
            // Our temporary buffer.
            var buffer = new byte[BufferSize];

            // Read the element count written as a 4-byte header. Read may
            // return fewer bytes than requested, so loop until all 4 arrive.
            var headerRead = 0;
            while (headerRead < sizeof(int))
            {
                var n = zip.Read(buffer, headerRead, sizeof(int) - headerRead);
                if (n <= 0)
                    throw new EndOfStreamException("Truncated header.");
                headerRead += n;
            }
            var resultLength = BitConverter.ToInt32(buffer, 0);

            // Creates the result array
            var result = new ushort[resultLength];

            // Decompress the file chunk by chunk
            var resultOffset = 0;
            for (; ; )
            {
                // Read a chunk of data
                var count = zip.Read(buffer, 0, BufferSize);
                if (count <= 0)
                    break;

                // Copy a piece of the decompressed buffer
                Buffer.BlockCopy(buffer, 0, result, resultOffset, count);

                // Advance counter
                resultOffset += count;
            }

            return result;
        }
    }

    /// <summary>Compresses a ushort array to a byte array.</summary>
    public static byte[] Compress_ShortToByte(ushort[] plainData)
    {
        using (var outputStream = new MemoryStream(AverageFileSize))
        {
            Compress_File(plainData, outputStream);
            return outputStream.ToArray();
        }
    }

    /// <summary>Compresses a ushort array directly to a file.</summary>
    public static void Compress_File(ushort[] plainData, string outputFilePath)
    {
        using (var stream = new FileStream(outputFilePath, FileMode.OpenOrCreate, FileAccess.Write))
            Compress_File(plainData, stream);
    }

    /// <summary>Compresses a ushort array directly to a file stream.</summary>
    public static void Compress_File(ushort[] plainData, Stream outputStream)
    {
        using (var zip = new DeflateStream(outputStream, CompressionMode.Compress, true))
        {
            // Our temporary buffer.
            var buffer = new byte[BufferSize];

            // Writes the length of the plain data
            zip.Write(BitConverter.GetBytes(plainData.Length), 0, sizeof(int));

            var inputOffset = 0;
            var availableBytes = plainData.Length * sizeof(ushort);
            while (availableBytes > 0)
            {
                // Compute the amount of bytes to copy.
                var bytesCount = Math.Min(BufferSize, availableBytes);

                // Copy a chunk of plain data into the temporary buffer
                Buffer.BlockCopy(plainData, inputOffset, buffer, 0, bytesCount);

                // Write the buffer
                zip.Write(buffer, 0, bytesCount);

                // Advance counters
                inputOffset += bytesCount;
                availableBytes -= bytesCount;
            }
        }
    }
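On that endianness note, here is a minimal sketch of a chunk copy that always stores little-endian bytes; CopyLittleEndian is a hypothetical helper, not part of the code above, and it assumes even byte offsets and counts (which BufferSize guarantees):

    /// <summary>Hypothetical helper: copies ushorts into a byte buffer in
    /// little-endian order regardless of the machine's native byte order.</summary>
    private static void CopyLittleEndian(ushort[] src, int srcByteOffset, byte[] dst, int byteCount)
    {
        if (BitConverter.IsLittleEndian)
        {
            // Native order already matches the on-disk format: fast path.
            Buffer.BlockCopy(src, srcByteOffset, dst, 0, byteCount);
            return;
        }
        for (int i = 0; i < byteCount; i += 2)
        {
            ushort v = src[(srcByteOffset + i) / 2];
            dst[i] = (byte)v;             // low byte first
            dst[i + 1] = (byte)(v >> 8);  // then high byte
        }
    }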

There is a GZipStream constructor that takes (Stream, CompressionLevel); you can pass CompressionLevel.Fastest from that enumeration to speed up compression.
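For example, a sketch of the question's compression body using that constructor (available from .NET 4.5):

    // Trades compression ratio for speed; CompressionLevel.Optimal is the other extreme.
    using (GZipStream zip = new GZipStream(outputStream, CompressionLevel.Fastest))
    {
        zip.Write(uncompressedData, 0, uncompressedData.Length);
    }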

Links to relevant documentation:
http://msdn.microsoft.com/pl-pl/library/hh137341(v=vs.110).aspx
http://msdn.microsoft.com/pl-pl/library/system.io.compression.compressionlevel(v=vs.110).aspx
