
Problem with C# HTTP PUT request code

I am trying to send a file to S3 via a PUT request URL that Amazon S3 has already generated for me.

My code works fine for small files, but it errors out on large files (>100 MB) after a few minutes of sending.

The error is:

The request was aborted: The request was canceled.
   at System.Net.ConnectStream.InternalWrite(Boolean async, Byte[] buffer, Int32 offset, Int32 size, AsyncCallback callback, Object state)
   at System.Net.ConnectStream.Write(Byte[] buffer, Int32 offset, Int32 size)

Can someone please tell me what is wrong with my code that is stopping it from sending large files? It is not due to the Amazon PUT request URL expiring, because I have that set to 30 minutes and the problem occurs after just a few minutes of sending.

The code eventually throws an exception on this line: dataStream.Write(byteArray, 0, byteArray.Length);

Once again, it works great for the smaller files that I am sending to S3, just not for large ones.

WebRequest request = WebRequest.Create(PUT_URL_FINAL[0]);
//PUT_URL_FINAL IS THE PRE-SIGNED AMAZON S3 URL THAT I AM SENDING THE FILE TO

request.Timeout = 360000; //6 minutes

request.Method = "PUT";

//result3 is the filename that I am sending                                     
request.ContentType =
    MimeType(GlobalClass.AppDir + Path.DirectorySeparatorChar + "unzip" +
             Path.DirectorySeparatorChar +
             System.Web.HttpUtility.UrlEncode(result3));

byte[] byteArray =
    File.ReadAllBytes(
        GlobalClass.AppDir + Path.DirectorySeparatorChar + "unzip" +
        Path.DirectorySeparatorChar +
        System.Web.HttpUtility.UrlEncode(result3));

request.ContentLength = byteArray.Length;
Stream dataStream = request.GetRequestStream();

// this is the line of code that it eventually quits on.  
// Works fine for small files, not for large ones
dataStream.Write(byteArray, 0, byteArray.Length); 

dataStream.Close();

//This will return "OK" if successful.
WebResponse response = request.GetResponse();
Console.WriteLine("++ HttpWebResponse: " +
                  ((HttpWebResponse)response).StatusDescription);

Use Fiddler or Wireshark to compare the traffic on the wire when the upload works (via a third-party tool) with when it doesn't (your code)... once you know the difference, you can change your code accordingly...

You should set the Timeout property of the WebRequest to a higher value. It causes the request to time out before it is completed.
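One detail worth adding (an assumption on my part, since the question's code uses plain WebRequest): HttpWebRequest has a separate ReadWriteTimeout property, which defaults to 300000 ms (5 minutes) and applies to each write on the request stream; Timeout alone does not cover those writes. That default lines up with the "fails after a few minutes" symptom. A minimal sketch, with a placeholder URL:

```csharp
using System;
using System.Net;

class TimeoutSketch
{
    static void Main()
    {
        // Cast to HttpWebRequest to reach ReadWriteTimeout; the URL is a placeholder.
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/upload");
        request.Method = "PUT";
        request.Timeout = 30 * 60 * 1000;          // overall request timeout: 30 minutes
        request.ReadWriteTimeout = 30 * 60 * 1000; // per-write timeout: 30 minutes (default is 5)

        Console.WriteLine(request.ReadWriteTimeout); // prints 1800000
    }
}
```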

I would try writing it in chunks and splitting up the byte array. It may be choking on one large chunk.

Something like this:

        const int chunkSize = 500;
        for (int i = 0; i < byteArray.Length; i += chunkSize)
        {
            int count = i + chunkSize > byteArray.Length ? byteArray.Length - i : chunkSize;
            dataStream.Write(byteArray, i, count);
        }

You may want to double-check that it wrote everything; I only did very basic testing on it.
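For files this size it may also help to avoid File.ReadAllBytes entirely and stream from disk in chunks (a sketch of my own, not from the answers above; the helper name is hypothetical). Setting AllowWriteStreamBuffering = false on the HttpWebRequest additionally prevents the whole body from being buffered in memory before sending. The demo below copies between in-memory streams; for a real upload, the destination would be request.GetRequestStream():

```csharp
using System;
using System.IO;
using System.Linq;

class ChunkedCopy
{
    // Copies `source` to `destination` in fixed-size chunks, so a large file
    // never has to be held in memory as one big byte array.
    static void CopyInChunks(Stream source, Stream destination, int chunkSize)
    {
        var buffer = new byte[chunkSize];
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            destination.Write(buffer, 0, read);
    }

    static void Main()
    {
        // Demonstrated against MemoryStreams; with a real PUT the source would
        // be a FileStream and the destination request.GetRequestStream().
        var data = Enumerable.Range(0, 100000).Select(i => (byte)i).ToArray();
        var source = new MemoryStream(data);
        var destination = new MemoryStream();

        CopyInChunks(source, destination, 8192);

        Console.WriteLine(destination.ToArray().SequenceEqual(data)); // prints True
    }
}
```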

Just a rough guess, but shouldn't you have:

request.ContentLength = byteArray.LongLength;

instead of:

request.ContentLength = byteArray.Length;

On second thought, 100 MB = 100 * 1024 * 1024 < 2^31, so it probably won't be the problem.
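The arithmetic above checks out, and there is a further reason LongLength isn't needed: WebRequest.ContentLength is declared as a long, so assigning an int Length simply widens it. A quick check of the numbers:

```csharp
using System;

class LengthCheck
{
    static void Main()
    {
        // 100 MB expressed in bytes, compared against the int range.
        long hundredMb = 100L * 1024 * 1024;
        Console.WriteLine(hundredMb);                // prints 104857600
        Console.WriteLine(hundredMb < int.MaxValue); // prints True
    }
}
```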
