
AWS S3 uploading/downloading huge files with low memory footprint

Suppose we have an app with very limited memory that has to upload/download huge files to/from AWS S3.

Question 1: what is the correct API to use when we need to upload/download directly to/from the file system while having very limited memory (say, 200 MB)?

One of the options to upload an object to S3 is this:

TransferManager.upload(String bucketName, String key, File file)

Question 2: will TransferManager.upload() put the entire file into memory, or is it smart enough to stream the content to S3 without filling up the memory?

Question 3: is there any API that can do zero-copy networking?

Question 4: AWS offers the option to move files from S3 to slower storage if you define a lifecycle policy. If a file has been moved to infrequent-access storage, do we query it the same way? (My assumption is that S3 will block me for hours to fetch the file before my download can start.) The important thing is whether this process is transparent to me as a client, or whether I need to figure out where my file is now and use a storage-class-specific API to get it.

Pardon the many questions; I searched for answers for a while but found only bits and pieces, no explicit answers.

Q1, Q2: Look into S3 multipart upload; that is what you need. TransferManager.upload(bucketName, key, file) streams the file from disk and automatically switches to multipart upload above a configurable size threshold, so it does not load the entire file into memory.
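To see why multipart upload fits the low-memory case, consider the sizing constraints S3 documents: each part (except the last) must be at least 5 MiB, and an upload can have at most 10,000 parts. The SDK chooses part sizes for you; the hypothetical helper below (not part of any SDK) just sketches that arithmetic, which is also how much memory a naive one-part-in-RAM uploader would need per part:

```java
public class PartSizer {
    static final long MIN_PART = 5L * 1024 * 1024; // S3 minimum part size (all parts but the last)
    static final long MAX_PARTS = 10_000;          // S3 maximum number of parts per upload

    /** Smallest part size that keeps the upload within 10,000 parts. */
    static long partSizeFor(long fileSize) {
        long forLimit = (fileSize + MAX_PARTS - 1) / MAX_PARTS; // ceil(fileSize / 10000)
        return Math.max(MIN_PART, forLimit);
    }

    /** Number of parts a file of the given size splits into. */
    static long partCount(long fileSize, long partSize) {
        return (fileSize + partSize - 1) / partSize; // ceil division
    }

    public static void main(String[] args) {
        long gib = 1L << 30;
        long partSize = partSizeFor(100 * gib); // a 100 GiB file
        System.out.println("part size = " + partSize
                + " bytes, parts = " + partCount(100 * gib, partSize));
    }
}
```

With a 5 MiB buffer per in-flight part, even a machine with only 200 MB to spare can move a 100 GiB file; the peak memory use depends on the part size and the number of parallel part uploads, not on the file size.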

Q3: No. S3 only exposes the standard and multipart upload APIs for now; there is no zero-copy networking path.

Q4: No, it works the other way around. To you, the file looks like it is stored normally, and you can access it as soon as it is uploaded (within seconds); the difference is in the price. Storing the data becomes much cheaper, but every MB you retrieve costs more. (That holds for the infrequent-access storage classes; only the Glacier classes require a separate restore request that can take hours before the object becomes downloadable.)
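For illustration, a lifecycle rule that transitions objects to the infrequent-access class after 30 days looks roughly like this (the rule ID and `uploads/` prefix are placeholders); after the transition, a plain GetObject on the same key keeps working unchanged:

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>archive-old-uploads</ID>
    <Filter>
      <Prefix>uploads/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>30</Days>
      <StorageClass>STANDARD_IA</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
```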

Good luck

