How to compute the hash of a large file chunk?
I want to be able to compute the hashes of arbitrarily sized chunks of a file in C#.
e.g.: Compute the hash of the 3rd gigabyte of a 4 GB file.
The main problem is that I don't want to load the entire file into memory, as there could be several files and the offsets could be quite arbitrary.
AFAIK, HashAlgorithm.ComputeHash allows me to use either a byte buffer or a stream. The stream would let me compute the hash efficiently, but for the entire file, not just for a specific chunk.
I was thinking of creating an alternate FileStream object and passing it to ComputeHash, where I would override the FileStream methods and read only a certain chunk of the file.
Is there a better solution than this, preferably using the built-in C# libraries? Thanks.
You should pass in either:
- a byte array containing the chunk of data to compute the hash from, or
- a stream that restricts access to the chunk you want to compute the hash from
The second option isn't all that hard; here's a quick LINQPad program I threw together. Note that it lacks quite a bit of error handling, such as checking that the chunk is actually available (i.e. that the position and length you pass in describe a range that actually exists and doesn't fall off the end of the underlying stream).
Needless to say, if this should end up as production code I would add a lot of error handling, and write a bunch of unit tests to ensure all edge cases are handled correctly.
You would construct the PartialStream instance for your file like this:
const long gb = 1024 * 1024 * 1024;

using (var hashAlgorithm = SHA256.Create()) // or any other HashAlgorithm
using (var fileStream = new FileStream(@"d:\temp\too_long_file.bin", FileMode.Open))
using (var chunk = new PartialStream(fileStream, 2 * gb, 1 * gb))
{
    var hash = hashAlgorithm.ComputeHash(chunk);
}
Here's the LINQPad test program:
void Main()
{
    var buffer = Enumerable.Range(0, 256).Select(i => (byte)i).ToArray();
    using (var underlying = new MemoryStream(buffer))
    using (var partialStream = new PartialStream(underlying, 64, 32))
    {
        var temp = new byte[1024]; // deliberately too large, to ensure we don't read past the window end
        partialStream.Read(temp, 0, temp.Length);
        temp.Dump();
        // should output bytes 64-95 (the 32-byte window), then 0's for the rest
    }
}
public class PartialStream : Stream
{
    private readonly Stream _UnderlyingStream;
    private readonly long _Position;
    private readonly long _Length;

    public PartialStream(Stream underlyingStream, long position, long length)
    {
        if (!underlyingStream.CanRead || !underlyingStream.CanSeek)
            throw new ArgumentException("Stream must be readable and seekable", "underlyingStream");

        _UnderlyingStream = underlyingStream;
        _Position = position;
        _Length = length;
        _UnderlyingStream.Position = position;
    }

    public override bool CanRead
    {
        get { return _UnderlyingStream.CanRead; }
    }

    public override bool CanWrite
    {
        get { return false; }
    }

    public override bool CanSeek
    {
        get { return true; }
    }

    public override long Length
    {
        get { return _Length; }
    }

    public override long Position
    {
        get { return _UnderlyingStream.Position - _Position; }
        set { _UnderlyingStream.Position = value + _Position; }
    }

    public override void Flush()
    {
        // nothing to flush; this stream is read-only
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        switch (origin)
        {
            case SeekOrigin.Begin:
                return _UnderlyingStream.Seek(_Position + offset, SeekOrigin.Begin) - _Position;

            case SeekOrigin.End:
                return _UnderlyingStream.Seek(_Position + _Length + offset, SeekOrigin.Begin) - _Position;

            case SeekOrigin.Current:
                return _UnderlyingStream.Seek(offset, SeekOrigin.Current) - _Position;

            default:
                throw new ArgumentException("Invalid seek origin", "origin");
        }
    }

    public override void SetLength(long length)
    {
        throw new NotSupportedException();
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        long left = _Length - Position;
        if (left <= 0)
            return 0; // at or past the end of the window

        if (left < count)
            count = (int)left;

        return _UnderlyingStream.Read(buffer, offset, count);
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        throw new NotSupportedException();
    }
}
You can use TransformBlock and TransformFinalBlock directly. That's pretty similar to what HashAlgorithm.ComputeHash does internally.
Something like:
using (var hashAlgorithm = new SHA256Managed())
using (var fileStream = File.OpenRead(...))
{
    fileStream.Position = ...;
    long bytesToHash = ...;

    var buf = new byte[4 * 1024];
    while (bytesToHash > 0)
    {
        var bytesRead = fileStream.Read(buf, 0, (int)Math.Min(bytesToHash, buf.Length));
        if (bytesRead == 0)
            throw new InvalidOperationException("Unexpected end of stream");

        hashAlgorithm.TransformBlock(buf, 0, bytesRead, null, 0);
        bytesToHash -= bytesRead;
    }

    hashAlgorithm.TransformFinalBlock(buf, 0, 0);
    var hash = hashAlgorithm.Hash;
    return hash;
}
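Wrapped up as a reusable helper, the same approach might look like the sketch below. The method name ComputeChunkHash (and the choice of SHA-256) are illustrative, not part of the original answer:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

static class ChunkHasher
{
    // Hypothetical helper illustrating the TransformBlock approach above:
    // hashes `length` bytes starting at `offset` without loading the whole file.
    public static byte[] ComputeChunkHash(string path, long offset, long length)
    {
        using (var hashAlgorithm = SHA256.Create())
        using (var fileStream = File.OpenRead(path))
        {
            fileStream.Position = offset;
            long bytesToHash = length;
            var buf = new byte[4 * 1024];

            while (bytesToHash > 0)
            {
                int bytesRead = fileStream.Read(buf, 0, (int)Math.Min(bytesToHash, buf.Length));
                if (bytesRead == 0)
                    throw new InvalidOperationException("Unexpected end of stream");

                hashAlgorithm.TransformBlock(buf, 0, bytesRead, null, 0);
                bytesToHash -= bytesRead;
            }

            // Finalize with an empty block; all data already went through TransformBlock.
            hashAlgorithm.TransformFinalBlock(buf, 0, 0);
            return hashAlgorithm.Hash;
        }
    }
}
```

For the example in the question (the 3rd gigabyte of a file), you would call it as `ComputeChunkHash(path, 2L * 1024 * 1024 * 1024, 1L * 1024 * 1024 * 1024)`.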
Your suggestion - passing in a restricted-access wrapper for your FileStream - is the cleanest solution. Your wrapper should defer everything to the wrapped Stream except the Length and Position properties.
How? Simply create a class that inherits from Stream. Make the constructor take:
- your source Stream (in your case, a FileStream)
- the start position of the chunk
- the length of the chunk
As an extension, this is a list of all the Streams that are available: http://msdn.microsoft.com/en-us/library/system.io.stream%28v=vs.100%29.aspx#inheritanceContinued
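As a sketch of what that constructor and the two overridden properties could look like (the class and member names here are illustrative; the PartialStream class in the first answer is a fuller version of the same idea):

```csharp
using System;
using System.IO;

// Minimal read-only wrapper exposing a fixed window of another stream.
// Everything except Length, Position, and Read defers to the source stream.
public class ChunkStream : Stream
{
    private readonly Stream _source; // e.g. a FileStream
    private readonly long _start;    // chunk start in the source stream
    private readonly long _length;   // chunk length

    public ChunkStream(Stream source, long start, long length)
    {
        _source = source;
        _start = start;
        _length = length;
        _source.Position = start;
    }

    public override long Length { get { return _length; } }

    public override long Position
    {
        // Translate between window-relative and source-stream positions.
        get { return _source.Position - _start; }
        set { _source.Position = value + _start; }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        long remaining = _length - Position;
        if (remaining <= 0) return 0; // don't read past the window
        return _source.Read(buffer, offset, (int)Math.Min(count, remaining));
    }

    public override bool CanRead { get { return _source.CanRead; } }
    public override bool CanSeek { get { return _source.CanSeek; } }
    public override bool CanWrite { get { return false; } }
    public override void Flush() { /* read-only: nothing to flush */ }

    public override long Seek(long offset, SeekOrigin origin)
    {
        switch (origin)
        {
            case SeekOrigin.Begin: Position = offset; break;
            case SeekOrigin.Current: Position += offset; break;
            case SeekOrigin.End: Position = _length + offset; break;
        }
        return Position;
    }

    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}
```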
To easily compute the hash of a chunk of a larger stream, use the TransformBlock and TransformFinalBlock methods:
Here's a LINQPad program that demonstrates:
void Main()
{
    const long gb = 1024 * 1024 * 1024;
    using (var stream = new FileStream(@"d:\temp\largefile.bin", FileMode.Open))
    {
        stream.Position = 2 * gb; // 3rd gb-chunk
        byte[] buffer = new byte[32768];
        long amount = 1 * gb;

        using (var hashAlgorithm = SHA1.Create())
        {
            while (amount > 0)
            {
                int bytesRead = stream.Read(buffer, 0, (int)Math.Min(buffer.Length, amount));
                if (bytesRead > 0)
                {
                    amount -= bytesRead;
                    if (amount > 0)
                        hashAlgorithm.TransformBlock(buffer, 0, bytesRead, buffer, 0);
                    else
                        hashAlgorithm.TransformFinalBlock(buffer, 0, bytesRead);
                }
                else
                    throw new InvalidOperationException();
            }
            hashAlgorithm.Hash.Dump();
        }
    }
}
To answer your original question ("Is there a better solution..."):
Not that I know of.
This seems to be a very special, non-trivial task, so a little extra work might be involved anyway. I think your approach of using a custom Stream class goes in the right direction; I'd probably do exactly the same.
And Gusdor and xander have already provided very helpful information on how to implement it. Good job, guys!