
Uploading large files (more than 1 GB) to Amazon S3 using Java: large files temporarily consume a lot of server space

I am trying to upload large files (more than 1 GB) to Amazon S3 using Java.

Currently, the file being uploaded is temporarily stored on the server in chunks, and is then uploaded to S3 in chunks. The problem is that this puts a huge load on the server, since it consumes server disk space temporarily. If multiple users try to upload large files at the same time, this becomes an issue.

Is there any way to upload files directly from the user's system to Amazon S3 in chunks, without storing the file on the server temporarily?

If the files are uploaded directly from the frontend, there is a major risk of the keys being exposed.

You should upload directly from the client using presigned URLs. There is plenty of documentation for this:

AWS SDK Presigned URL + Multipart upload

Presigned URLs are useful if you want your user/customer to be able to upload a specific object to your bucket, but you don't require them to have AWS security credentials or permissions.
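As a minimal sketch of generating such a URL on the server, assuming the AWS SDK for Java v2 (`software.amazon.awssdk:s3`) is on the classpath; the bucket name, key, and credentials below are placeholders:

```java
import java.net.URL;
import java.time.Duration;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PresignedPutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;

public class PresignPutExample {
    public static URL presignUpload(String bucket, String key) {
        // Placeholder credentials: presigning is a purely local signature
        // computation, so no network call happens here. In production the
        // default credential chain would normally supply these.
        S3Presigner presigner = S3Presigner.builder()
                .region(Region.US_EAST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("DEMO-ACCESS-KEY", "DEMO-SECRET-KEY")))
                .build();

        PutObjectRequest putRequest = PutObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .build();

        PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(15)) // URL expires after 15 minutes
                .putObjectRequest(putRequest)
                .build();

        PresignedPutObjectRequest presigned = presigner.presignPutObject(presignRequest);
        presigner.close();
        return presigned.url();
    }

    public static void main(String[] args) {
        // Hand this URL to the browser; the client then PUTs the file bytes
        // directly to S3, never touching your server.
        System.out.println(presignUpload("my-bucket", "uploads/big-file.bin"));
    }
}
```

The frontend uploads with a plain HTTP `PUT` to the returned URL, so the file bytes go straight from the user's machine to S3.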

You may also be interested in limiting the size that a user is able to upload:

Limit Size Of Objects While Uploading To Amazon S3 Using Pre-Signed URL
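The linked article uses a presigned POST policy with a `content-length-range` condition; a related technique that works with plain presigned `PUT` URLs in the Java SDK v2 is to sign an exact `Content-Length`, so a client sending a different number of bytes gets a signature mismatch. A hedged sketch (placeholder bucket, key, and credentials):

```java
import java.time.Duration;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PresignedPutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;

public class SizeLimitedPresign {
    public static PresignedPutObjectRequest presignWithExactLength(
            String bucket, String key, long contentLength) {
        S3Presigner presigner = S3Presigner.builder()
                .region(Region.US_EAST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("DEMO-ACCESS-KEY", "DEMO-SECRET-KEY")))
                .build();

        // Content-Length set here becomes part of the signed request: the
        // client must send exactly this many bytes, which caps upload size.
        PutObjectRequest putRequest = PutObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .contentLength(contentLength)
                .build();

        PresignedPutObjectRequest presigned = presigner.presignPutObject(
                PutObjectPresignRequest.builder()
                        .signatureDuration(Duration.ofMinutes(15))
                        .putObjectRequest(putRequest)
                        .build());
        presigner.close();
        return presigned;
    }
}
```

This requires the frontend to tell the server the file size before requesting the URL; if you need a size *range* rather than an exact value, the POST-policy approach from the linked article is the better fit.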

Think of a presigned URL as a temporary credential for the client to access a specific S3 location. These credentials expire after a short time, so there is less security concern, but do remember to restrict the access granted by the signed URLs appropriately.
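For the chunked-upload part of the question, presigning combines with S3 multipart upload: the server starts the upload (via `CreateMultipartUpload`, which does call S3) and then presigns one `UploadPart` URL per chunk. A sketch of the presigning step, assuming the AWS SDK for Java v2 and an `uploadId` already obtained server-side; names below are placeholders:

```java
import java.net.URL;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.model.UploadPartRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.UploadPartPresignRequest;

public class MultipartPresign {
    // Presigns one URL per part; the uploadId comes from a prior
    // CreateMultipartUpload call made on the server.
    public static List<URL> presignParts(String bucket, String key,
                                         String uploadId, int partCount) {
        S3Presigner presigner = S3Presigner.builder()
                .region(Region.US_EAST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("DEMO-ACCESS-KEY", "DEMO-SECRET-KEY")))
                .build();

        List<URL> urls = new ArrayList<>();
        for (int part = 1; part <= partCount; part++) { // part numbers are 1-based
            UploadPartRequest partRequest = UploadPartRequest.builder()
                    .bucket(bucket)
                    .key(key)
                    .uploadId(uploadId)
                    .partNumber(part)
                    .build();
            urls.add(presigner.presignUploadPart(
                    UploadPartPresignRequest.builder()
                            .signatureDuration(Duration.ofMinutes(60))
                            .uploadPartRequest(partRequest)
                            .build()).url());
        }
        presigner.close();
        return urls;
    }
}
```

The client PUTs each chunk to its URL and records the `ETag` response header for each part; the server then finishes with `CompleteMultipartUpload`, passing the part-number/ETag pairs. No file bytes ever pass through the server.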
