Upload file on Amazon S3 of size more than 3 GB

I am trying to upload files of size more than 3 GB to Amazon S3. When I upload small files they are uploaded fine, but when I try to upload a big file it shows me an HTTP 400 error.

Can anyone tell me where I am going wrong, or do I need to upload the file in chunks? Thanks in advance.

public bool UploadFileDicom(HttpPostedFileBase file)
{
    bool isSuccess = false;
    try
    {
        string bucketName = Convert.ToString(WebConfigurationManager.AppSettings["iqcDicomBucket"]);
        client = new AmazonS3Client(AWS_ACCESS_KEY, AWS_SECRET_KEY, Amazon.RegionEndpoint.USWest2);
        var request = new PutObjectRequest()
        {
            BucketName = bucketName,
            CannedACL = S3CannedACL.PublicRead, // make the uploaded file publicly accessible
            Key = file.FileName,
            InputStream = file.InputStream      // send the file stream
        };
        client.PutObject(request);
        isSuccess = true;
    }
    catch (Exception ex)
    {
        logger.Error(DateTime.Now + " Error in GlobalComman.cs, UploadFileDicom function: " + ex.Message);
        throw;
    }
    return isSuccess;
}

Check out this lib. It uploads the file in chunks of configurable sizes. As your requirement is to get it done in a browser, this should work.

First, you're not allowed to upload objects larger than 5 GB in a single PUT (a 6 GB upload must fail, but a 2 GB file should not).

But uploading large files can cause various issues, so to avoid the problems of a single upload, Multipart Upload is recommended:

Upload objects in parts—Using the Multipart upload API you can upload large objects, up to 5 TB.

The Multipart Upload API is designed to improve the upload experience for larger objects. You can upload objects in parts. These object parts can be uploaded independently, in any order, and in parallel. You can use a Multipart Upload for objects from 5 MB to 5 TB in size. For more information, see Uploading Objects Using Multipart Upload API.

Check here. The following Java code example uploads a file in parts to an Amazon S3 bucket:

import java.io.File;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class UploadObjectMultipartUploadUsingHighLevelAPI {

    public static void main(String[] args) throws Exception {
        String existingBucketName = "*** Provide existing bucket name ***";
        String keyName            = "*** Provide object key ***";
        String filePath           = "*** Path to and name of the file to upload ***";

        TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

        // TransferManager processes all transfers asynchronously,
        // so this call will return immediately.
        Upload upload = tm.upload(
                existingBucketName, keyName, new File(filePath));

        try {
            // Or you can block and wait for the upload to finish
            upload.waitForCompletion();
            System.out.println("Upload complete.");
        } catch (AmazonClientException amazonClientException) {
            System.out.println("Unable to upload file, upload was aborted.");
            amazonClientException.printStackTrace();
        }
    }
}
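
Since the code in the question is C#, here is a minimal sketch of the same high-level approach using the AWS SDK for .NET's TransferUtility, which switches to multipart upload automatically for large objects. The region, ACL and HttpPostedFileBase stream mirror the question; the bucket name, class name and part size below are illustrative assumptions:

using System.Web;
using Amazon.S3;
using Amazon.S3.Transfer;

public class S3MultipartUploader
{
    private readonly IAmazonS3 client;

    public S3MultipartUploader(string accessKey, string secretKey)
    {
        client = new AmazonS3Client(accessKey, secretKey, Amazon.RegionEndpoint.USWest2);
    }

    public bool UploadLargeFile(HttpPostedFileBase file, string bucketName)
    {
        // TransferUtility wraps the multipart upload API: it splits the stream
        // into parts, uploads them, and completes (or aborts) the upload for you.
        var transferUtility = new TransferUtility(client);

        var request = new TransferUtilityUploadRequest
        {
            BucketName = bucketName,
            Key = file.FileName,
            InputStream = file.InputStream,
            CannedACL = S3CannedACL.PublicRead, // same public-read ACL as the question
            PartSize = 10 * 1024 * 1024         // 10 MB parts (illustrative)
        };

        // Blocks until every part has been uploaded; throws on failure.
        transferUtility.Upload(request);
        return true;
    }
}

TransferUtility handles splitting, uploading and completing the multipart upload for you, so it is the simplest way past the 5 GB single-PUT limit from .NET code.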

@JordiCastilla is correct about S3 multipart and the 5GB threshold... but your first problem is a local one:

maxRequestLength="2147482624" 

Hmmm. That's about 2 GiB. So, when you say your controller isn't firing, that suggests your 400 error isn't even from S3 as the question implies. Your local configuration is causing the browser's request to be canceled in flight because it's larger than that limit allows.

Your first step seems like it would be to increase this configuration value, bearing in mind that you will subsequently also need to transition to multipart uploads to S3 when you cross the 5GB threshold.
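
For example, assuming ASP.NET on IIS, both of the limits below govern the maximum accepted request size and would need to be raised in web.config. The 4 GB values here are purely illustrative; note that maxRequestLength is specified in kilobytes and maxAllowedContentLength in bytes:

<!-- Illustrative web.config values; adjust to the largest upload you actually want to accept. -->
<system.web>
  <!-- maxRequestLength is in kilobytes; 4194304 KB = 4 GB -->
  <httpRuntime maxRequestLength="4194304" executionTimeout="3600" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <!-- maxAllowedContentLength is in bytes; 4294967295 is its maximum (about 4 GB) -->
      <requestLimits maxAllowedContentLength="4294967295" />
    </requestFiltering>
  </security>
</system.webServer>

Because maxAllowedContentLength is a 32-bit value, it cannot be raised past about 4 GB, which is another reason truly large objects end up going to S3 in multipart chunks.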

Remember, also, that a request in progress is eating up temporary disk space on your server, so having this value even as large as it already is (not to mention setting it even larger) could put you at risk of a denial-of-service attack, where spurious requests could disable your site by exhausting your temp space.
