
Amazon S3 multipart UploadPartRequest allows only a single thread to upload at a time using ASP.NET

I am trying to upload video files to Amazon S3 using the multipart upload method in ASP.NET, and I traced the upload progress through logs. It uploads 106496 bytes at a time, and only a single thread runs at a time; I never saw multiple threads running. Please clarify why it runs on a single thread and why the upload takes so long: even a 20 MB file takes almost 2 minutes.

Here is my code, which uses UploadPartRequest:

private void UploadFileOnAmazon(string subUrl, string filename, Stream audioStream, string extension)
    {
        client = new AmazonS3Client(accessKey, secretKey, Amazon.RegionEndpoint.USEast1);

        // List to store upload part responses.
        List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();


        // 1. Initialize.
        InitiateMultipartUploadRequest initiateRequest = new InitiateMultipartUploadRequest
        {
            BucketName = bucketName,
            Key = subUrl + filename
        };

        InitiateMultipartUploadResponse initResponse =
            client.InitiateMultipartUpload(initiateRequest);

        // 2. Upload Parts.
        //long contentLength = new FileInfo(filePath).Length;
        long contentLength = audioStream.Length;
        long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB


        try
        {
            long filePosition = 0;
            for (int i = 1; filePosition < contentLength; i++)
            {

                UploadPartRequest uploadRequest = new UploadPartRequest
                {
                    BucketName = bucketName,
                    Key = subUrl + filename,
                    UploadId = initResponse.UploadId,
                    PartNumber = i,
                    PartSize = Math.Min(partSize, contentLength - filePosition), // the last part may be smaller than 5 MB
                    FilePosition = filePosition,
                    InputStream = audioStream
                    //FilePath = filePath
                };

                // Upload part and add response to our list.
                uploadRequest.StreamTransferProgress += new EventHandler<StreamTransferProgressArgs>(UploadPartProgressEventCallback);                 
                uploadResponses.Add(client.UploadPart(uploadRequest));

                filePosition += partSize;
            }

            logger.Info("Done");


            // Step 3: complete.
            CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest
            {
                BucketName = bucketName,
                Key = subUrl + filename,
                UploadId = initResponse.UploadId,
                //PartETags = new List<PartETag>(uploadResponses)

            };
            completeRequest.AddPartETags(uploadResponses);
            CompleteMultipartUploadResponse completeUploadResponse =
                client.CompleteMultipartUpload(completeRequest);

        }
        catch (Exception exception)
        {
            Console.WriteLine("Exception occurred: {0}", exception.Message);
            AbortMultipartUploadRequest abortMPURequest = new AbortMultipartUploadRequest
            {
                BucketName = bucketName,
                Key = subUrl + filename,
                UploadId = initResponse.UploadId
            };
            client.AbortMultipartUpload(abortMPURequest);
        }
    }
    public static void UploadPartProgressEventCallback(object sender, StreamTransferProgressArgs e)
    {
        // Process event. 
        logger.DebugFormat("{0}/{1}", e.TransferredBytes, e.TotalBytes);
    }       

Is there anything wrong with my code, or how can I make the parts upload in parallel to speed up the upload?

Rather than managing the multipart upload yourself, try using the TransferUtility, which does all the hard work for you!

See: Using the High-Level .NET API for Multipart Upload
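For illustration, a minimal sketch of the high-level API. The bucketName, subUrl, filename and audioStream variables are assumed to come from the question's code; TransferUtilityConfig.ConcurrentServiceRequests is the knob that controls how many parts are uploaded in parallel, which is exactly the multi-threading the question asks about:

```csharp
using Amazon.S3;
using Amazon.S3.Transfer;

// client is the AmazonS3Client from the question's code.
var transferConfig = new TransferUtilityConfig
{
    // How many parts TransferUtility uploads concurrently.
    ConcurrentServiceRequests = 10
};
var transferUtility = new TransferUtility(client, transferConfig);

var uploadRequest = new TransferUtilityUploadRequest
{
    BucketName = bucketName,
    Key = subUrl + filename,
    InputStream = audioStream,
    PartSize = 5 * 1024 * 1024 // 5 MB parts, as in the question
};

// TransferUtility initiates the multipart upload, uploads the
// parts concurrently, and completes (or aborts) it for you.
transferUtility.Upload(uploadRequest);
```

Note that concurrent part uploads need a seekable stream of known length, which the question's audioStream already appears to be.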

The AmazonS3Client internally uses an AmazonS3Config instance to determine the buffer size used for transfers (ref 1). This AmazonS3Config (ref 2) has a property named BufferSize whose default value comes from a constant in AWSSDKUtils (ref 3), which in the current SDK version defaults to 8192 bytes: quite a small value IMHO.

You can use a custom AmazonS3Config instance with an arbitrary BufferSize value. To build an AmazonS3Client that respects your custom config, pass the config to the client constructor. Example:

// Create credentials.
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
// Create custom config.
AmazonS3Config config = new AmazonS3Config
{
    RegionEndpoint = Amazon.RegionEndpoint.USEast1,
    BufferSize = 512 * 1024, // 512 KiB
};
// Pass credentials + custom config to the client.
AmazonS3Client client = new AmazonS3Client(credentials, config);

// They uploaded happily ever after.
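If the larger buffer helps, note that it composes with the high-level approach from the other answer, since TransferUtility wraps whatever client you give it. A sketch, reusing the client built above; bucketName, subUrl, filename and audioStream stand in for the values used in the question:

```csharp
using Amazon.S3.Transfer;

// client is the AmazonS3Client built above with the custom BufferSize.
var transferUtility = new TransferUtility(client);

// Uploads the stream as a multipart upload when it is large enough,
// using the client's custom config for each transfer.
transferUtility.Upload(audioStream, bucketName, subUrl + filename);
```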
