Parallel compression C#
Is it possible to optimize this code using Parallel.ForEach or in some other way?
using (var zipStream = new ZipOutputStream(OpenWriteArchive()))
{
    zipStream.CompressionLevel = CompressionLevel.Level9;
    foreach (var document in documents)
    {
        zipStream.PutNextEntry(GetZipEntryName(type));
        using (var targetStream = new MemoryStream()) // document stream
        {
            DocumentHelper.SaveDocument(document.Value, targetStream, type);
            targetStream.Position = 0;
            targetStream.CopyTo(zipStream);
        }
        GC.Collect();
    }
}
The problem is that the ZipOutputStream of both DotNetZip and SharpZipLib does not support changing the position or seeking.
Writing to the zip stream from multiple threads leads to errors. Accumulating the result streams in a ConcurrentStack is also not an option, because the application can process 1000+ documents and must compress the streams and save them to the cloud on the fly.
Is there any way to solve this?
Solved by using a ProducerConsumerQueue (producer-consumer pattern).
using (var queue = new ProducerConsumerQueue<byte[]>(HandlerDelegate))
{
    Parallel.ForEach(documents, document =>
    {
        using (var documentStream = new MemoryStream())
        {
            // saving document here ...
            queue.EnqueueTask(documentStream.ToArray());
        }
    });
}
protected void HandlerDelegate(byte[] content)
{
    ZipOutputStream.PutNextEntry(Guid.NewGuid() + ".pdf");
    using (var stream = new MemoryStream(content))
    {
        stream.Position = 0;
        stream.CopyTo(ZipOutputStream);
    }
}
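The ProducerConsumerQueue<T> class used above is not shown in the answer; a minimal sketch of what it is assumed to do, built on the framework's BlockingCollection<T>, might look like this (the class name and EnqueueTask method mirror the usage above, but the implementation is an assumption):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Minimal sketch of the ProducerConsumerQueue<T> assumed above:
// any number of producer threads enqueue items, while a single
// consumer task drains them, so the handler (which writes to the
// zip stream) always runs serially on one thread.
public sealed class ProducerConsumerQueue<T> : IDisposable
{
    private readonly BlockingCollection<T> _items = new BlockingCollection<T>();
    private readonly Task _consumer;

    public ProducerConsumerQueue(Action<T> handler)
    {
        _consumer = Task.Run(() =>
        {
            // GetConsumingEnumerable blocks until items arrive and
            // ends once CompleteAdding has been called and the
            // collection is empty.
            foreach (var item in _items.GetConsumingEnumerable())
                handler(item);
        });
    }

    public void EnqueueTask(T item) => _items.Add(item);

    public void Dispose()
    {
        _items.CompleteAdding();
        _consumer.Wait(); // drain remaining items before returning
        _items.Dispose();
    }
}
```

Disposing the queue at the end of the `using` block is what guarantees that every enqueued document has been written before the zip stream is closed.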
Creating the zip stream inside the parallel foreach was also tried, e.g.:
Parallel.ForEach(documents, document =>
{
    using (var zipStream = new ZipOutputStream(OpenWriteArchive()))
    {
        zipStream.CompressionLevel = CompressionLevel.Level9;
        zipStream.PutNextEntry(GetZipEntryName(type));
        using (var targetStream = new MemoryStream()) // document stream
        {
            DocumentHelper.SaveDocument(document.Value, targetStream, type);
            targetStream.Position = 0;
            targetStream.CopyTo(zipStream);
        }
        GC.Collect();
    }
});
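For readers without DotNetZip or SharpZipLib, the same producer-consumer idea can be shown end to end with the framework's System.IO.Compression: documents are serialized in parallel, while a single consumer task writes entries through one ZipArchive. This is a self-contained sketch, not the answer's exact code; `CompressParallel` is an illustrative name, the string content stands in for DocumentHelper.SaveDocument, and the bounded capacity caps memory when there are 1000+ documents:

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.IO.Compression;
using System.Threading.Tasks;

static byte[] CompressParallel(string[] names)
{
    // Bounded queue: producers block once 16 serialized documents
    // are waiting, so memory stays flat for large document sets.
    var queue = new BlockingCollection<(string Name, byte[] Content)>(boundedCapacity: 16);
    var output = new MemoryStream();

    // Single consumer: the only thread that touches the archive.
    var writer = Task.Run(() =>
    {
        using (var archive = new ZipArchive(output, ZipArchiveMode.Create, leaveOpen: true))
        {
            foreach (var (name, content) in queue.GetConsumingEnumerable())
            {
                var entry = archive.CreateEntry(name, CompressionLevel.Optimal);
                using (var entryStream = entry.Open())
                    entryStream.Write(content, 0, content.Length);
            }
        }
    });

    // Producers: serialize documents on worker threads.
    Parallel.ForEach(names, name =>
    {
        // Stand-in for DocumentHelper.SaveDocument(...)
        var content = System.Text.Encoding.UTF8.GetBytes("content of " + name);
        queue.Add((name, content));
    });

    queue.CompleteAdding();
    writer.Wait();
    return output.ToArray();
}
```

The resulting bytes can be verified by reopening them with `new ZipArchive(new MemoryStream(bytes), ZipArchiveMode.Read)` and checking `Entries.Count`.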
Bye!