
How can I upload a file to S3 using a presigned URL without loading it into memory

I'm using presigned AWS S3 URLs to upload files from a device with limited memory using Java. However, I occasionally have large files that I need to upload. The problem is that loading these files into memory is causing out-of-memory exceptions.

I've found that it's possible to stream a file directly to S3 (eg, https://www.example-code.com/java/s3_file_stream_upload.asp ), but I can't find any information about how to do that using a presigned url.

Is there a way to upload large files (under 5GB) to S3 using presigned URLs without loading the file into memory first?

This can be achieved by streaming the upload. You don't even need to store the file locally on your server, which avoids the memory exception. The code below is in Java:

public boolean uploadFileDirectlyToS3(MultipartFile multipartFile, String keyName) {
    // keyName is the unique path (object key) under which the file is stored in S3
    ObjectMetadata metadata = new ObjectMetadata();
    try {
        metadata.setContentType(multipartFile.getContentType());
        metadata.setContentLength(multipartFile.getSize());
        // Passing the InputStream together with a known content length lets the
        // SDK stream the upload instead of buffering the whole file in memory.
        s3Client.putObject(CORE_BUNDLE.getString("bucket.name"), keyName,
                multipartFile.getInputStream(), metadata);
        return true;
    } catch (Exception e) {
        logger.error("Error occurred while uploading file to S3: "
                + multipartFile.getOriginalFilename(), e);
        return false;
    }
}
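Note that the snippet above uses an `AmazonS3` client rather than a presigned URL. To upload through a presigned URL itself without loading the file into memory, one option is a plain `HttpURLConnection` PUT with fixed-length streaming mode, which makes the JDK stream the request body rather than buffer it to compute `Content-Length`. A minimal sketch (the class and method names here are illustrative, not from the original post):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PresignedStreamUpload {

    // Streams a local file to a presigned S3 PUT URL without buffering it in memory.
    // Returns the HTTP response code (S3 returns 200 on a successful PUT).
    public static int uploadFile(String presignedUrl, File file) throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(presignedUrl).openConnection();
        connection.setDoOutput(true);
        connection.setRequestMethod("PUT");
        // Declaring the body length up front lets HttpURLConnection stream the
        // request instead of buffering it internally to compute Content-Length.
        connection.setFixedLengthStreamingMode(file.length());
        try (InputStream in = new FileInputStream(file);
             OutputStream out = connection.getOutputStream()) {
            byte[] buffer = new byte[8192];          // small, fixed-size copy buffer
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        return connection.getResponseCode();
    }
}
```

Memory use stays bounded by the 8 KB copy buffer regardless of file size, so this works for files up to the 5GB single-PUT limit. Any headers that were signed into the presigned URL (e.g. `Content-Type`) must be set on the connection to match, or S3 will reject the request with a signature mismatch.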
