
Upload file on Amazon S3 of size more than 3 GB

I am trying to upload files larger than 3 GB to Amazon S3. When I upload small files they get uploaded fine, but when I try to upload a big file it shows me an HTTP 400 error.

Can anyone tell me where I am going wrong, or do I need to upload the file in chunks? Thanks in advance.

public bool UploadFileDicom(HttpPostedFileBase file)
{
    bool isSuccess = false;
    try
    {
        string bucketName = Convert.ToString(WebConfigurationManager.AppSettings["iqcDicomBucket"]);
        client = new AmazonS3Client(AWS_ACCESS_KEY, AWS_SECRET_KEY, Amazon.RegionEndpoint.USWest2);
        var request = new PutObjectRequest()
        {
            BucketName = bucketName,
            CannedACL = S3CannedACL.PublicRead, // make the uploaded object publicly accessible
            Key = file.FileName,
            InputStream = file.InputStream      // send the file stream directly to S3
        };
        client.PutObject(request);
        isSuccess = true;
    }
    catch (Exception ex)
    {
        logger.Error(DateTime.Now + " Error in GlobalComman.cs, UploadFileDicom function: " + ex.Message);
        throw; // rethrow without resetting the stack trace
    }
    return isSuccess;
}

Check out this lib. It uploads the file in chunks of configurable sizes. Since your requirement is to get this done in a browser, it should work.

First, you're not allowed to upload files of more than 5 GB in a single PUT (so a 6 GB upload must fail, while a 2 GB file would not):

But uploading large files can run into various issues, so to avoid the problems of a Single Upload, Multipart Upload is recommended:

Upload objects in parts: Using the Multipart Upload API you can upload large objects, up to 5 TB.

The Multipart Upload API is designed to improve the upload experience for larger objects. You can upload objects in parts. These object parts can be uploaded independently, in any order, and in parallel. You can use a Multipart Upload for objects from 5 MB to 5 TB in size. For more information, see Uploading Objects Using Multipart Upload API.
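Since the question is in C#, here is a minimal sketch of what the low-level side of the Multipart Upload API looks like with the AWS SDK for .NET. The bucketName, keyName, and filePath arguments are hypothetical placeholders, the 100 MB part size is an arbitrary choice, and the synchronous calls assume the .NET Framework build of the SDK:

using System;
using System.Collections.Generic;
using System.IO;
using Amazon.S3;
using Amazon.S3.Model;

public class MultipartUploadSketch
{
    public static void UploadInParts(IAmazonS3 client, string bucketName, string keyName, string filePath)
    {
        // 1. Tell S3 a multipart upload is starting and remember the upload id.
        var init = client.InitiateMultipartUpload(new InitiateMultipartUploadRequest
        {
            BucketName = bucketName,
            Key = keyName
        });

        const long partSize = 100L * 1024 * 1024; // 100 MB per part (every part except the last must be >= 5 MB)
        long fileSize = new FileInfo(filePath).Length;
        var partETags = new List<PartETag>();

        try
        {
            // 2. Upload the file part by part; parts could also be sent in parallel.
            long position = 0;
            for (int partNumber = 1; position < fileSize; partNumber++)
            {
                var partResponse = client.UploadPart(new UploadPartRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    UploadId = init.UploadId,
                    PartNumber = partNumber,
                    PartSize = Math.Min(partSize, fileSize - position),
                    FilePosition = position,
                    FilePath = filePath
                });
                partETags.Add(new PartETag(partNumber, partResponse.ETag));
                position += partSize;
            }

            // 3. Ask S3 to stitch the parts into the final object.
            client.CompleteMultipartUpload(new CompleteMultipartUploadRequest
            {
                BucketName = bucketName,
                Key = keyName,
                UploadId = init.UploadId,
                PartETags = partETags
            });
        }
        catch
        {
            // 4. Abort on failure, otherwise orphaned parts keep accruing storage charges.
            client.AbortMultipartUpload(new AbortMultipartUploadRequest
            {
                BucketName = bucketName,
                Key = keyName,
                UploadId = init.UploadId
            });
            throw;
        }
    }
}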

Check here: the following Java code example uploads a file in parts to an Amazon S3 bucket:

import java.io.File;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class UploadObjectMultipartUploadUsingHighLevelAPI {

    public static void main(String[] args) throws Exception {
        String existingBucketName = "*** Provide existing bucket name ***";
        String keyName            = "*** Provide object key ***";
        String filePath           = "*** Path to and name of the file to upload ***";

        TransferManager tm = new TransferManager(new ProfileCredentialsProvider());
        // TransferManager processes all transfers asynchronously,
        // so this call returns immediately.
        Upload upload = tm.upload(
                existingBucketName, keyName, new File(filePath));

        try {
            // Or you can block and wait for the upload to finish
            upload.waitForCompletion();
            System.out.println("Upload complete.");
        } catch (AmazonClientException amazonClientException) {
            System.out.println("Unable to upload file, upload was aborted.");
            amazonClientException.printStackTrace();
        }
    }
}
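Since the question itself uses the AWS SDK for .NET, the C# equivalent of that high-level approach is the SDK's TransferUtility, which uses a single PUT for small files and switches to multipart upload automatically for large ones. A minimal sketch under the same assumptions as the question's code (it reuses the AWS_ACCESS_KEY and AWS_SECRET_KEY fields from that class, and the 50 MB part size is an arbitrary choice):

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System.Web;

public bool UploadFileDicomMultipart(HttpPostedFileBase file, string bucketName)
{
    var client = new AmazonS3Client(AWS_ACCESS_KEY, AWS_SECRET_KEY, RegionEndpoint.USWest2);

    // TransferUtility handles the initiate/upload-part/complete dance internally.
    var transferUtility = new TransferUtility(client);
    transferUtility.Upload(new TransferUtilityUploadRequest
    {
        BucketName = bucketName,
        Key = file.FileName,
        InputStream = file.InputStream,
        PartSize = 50L * 1024 * 1024,       // 50 MB parts (arbitrary; tune as needed)
        CannedACL = S3CannedACL.PublicRead  // same public-read ACL as the original code
    });
    return true;
}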

@JordiCastilla is correct about S3 multipart and the 5 GB threshold... but your first problem is a local one:

maxRequestLength="2147482624" 

Hmmm. That's about 2 GiB. So, when you say your controller isn't firing, that suggests your 400 error isn't even from S3 as the question implies. Your local configuration is causing the browser's request to be canceled in flight, because it's larger than your local configuration allows.

Your first step, then, would seem to be increasing this configuration value, bearing in mind that you will subsequently also need to transition to multipart uploads to S3 when you cross the 5 GB threshold.
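For reference, a hedged sketch of where those limits live in web.config. Note the units differ: httpRuntime's maxRequestLength is measured in kilobytes, while IIS 7+'s maxAllowedContentLength is in bytes (and is a uint, so roughly 4 GB is its hard ceiling; beyond that you'd need a chunked client-side upload anyway):

<system.web>
  <!-- maxRequestLength is in KILOBYTES: 4194304 KB = 4 GB -->
  <httpRuntime maxRequestLength="4194304" executionTimeout="3600" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <!-- maxAllowedContentLength is in BYTES and is a uint,
           so 4294967295 (~4 GB) is the maximum here -->
      <requestLimits maxAllowedContentLength="4294967295" />
    </requestFiltering>
  </security>
</system.webServer>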

Remember, also, that a request in progress is eating up temporary disk space on your server, so having this value even as large as it already is (not to mention setting it even larger) could put you at risk of a denial-of-service attack, where spurious requests disable your site by exhausting your temp space.
