
Download a big file in parts from Amazon S3

I would like to download a big file from Amazon S3 into RAM. The file is bigger than the RAM size, so it seems I need to load it in parts; each part would be returned from an endpoint. I also cannot use the hard drive to store the downloaded file. I have an InputStream object and I am trying to load the object like below:

    inputStream.skip(totalBytes);
    long downloadedBytesCount = 0;
    ByteArrayOutputStream result = new ByteArrayOutputStream();
    byte[] buffer = new byte[1024];
    int length;
    // Read until this part is full or the stream ends.
    // Note: the original do/while wrote the buffer before checking for
    // end-of-stream, so a final read() of -1 would throw an exception.
    while (downloadedBytesCount <= partOfFileSize
            && (length = inputStream.read(buffer)) != -1) {
        result.write(buffer, 0, length);
        downloadedBytesCount += length;
    }
    totalBytes += downloadedBytesCount;

but that code has a problem: each new request starts downloading the file from the beginning, so the last request (for, say, the final 20 MB) ends up downloading the whole file (for example, 1 GB). So the skip(long) method doesn't work as I expected.

How can I download the file from the InputStream in parts? Any suggestions?

The standard S3 library can transfer whatever parts of the file you want:

(taken from the AWS docs)

GetObjectRequest rangeObjectRequest = new GetObjectRequest(
        bucketName, key);
rangeObjectRequest.setRange(0, 10); // retrieve the first 11 bytes (the range is inclusive).
S3Object objectPortion = s3Client.getObject(rangeObjectRequest);

InputStream objectData = objectPortion.getObjectContent();

In your program you could, for example, read 1000 bytes at a time by moving the range.
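A sketch of that loop: compute an inclusive byte range per request, fetch it, and advance the offset by the number of bytes actually returned. The `fetchRange` helper below is a hypothetical stand-in for the ranged S3 GET (`setRange(start, end)` plus `getObject`), backed by an in-memory array so the example runs without AWS credentials.

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class RangedDownload {

    // Hypothetical stand-in for a ranged S3 GET. In real code this would be:
    //   GetObjectRequest req = new GetObjectRequest(bucketName, key);
    //   req.setRange(start, end);                 // inclusive on both ends
    //   InputStream in = s3Client.getObject(req).getObjectContent();
    static byte[] fetchRange(byte[] remote, long start, long end) {
        // Clamp the end like S3 does when the range runs past the object.
        int to = (int) Math.min(end + 1, remote.length);
        return Arrays.copyOfRange(remote, (int) start, to);
    }

    public static void main(String[] args) {
        byte[] remote = new byte[2500];          // pretend this object lives in S3
        for (int i = 0; i < remote.length; i++) remote[i] = (byte) i;

        long partSize = 1000;                    // bytes per request
        long offset = 0;
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        while (offset < remote.length) {
            long end = offset + partSize - 1;    // inclusive end of this range
            byte[] part = fetchRange(remote, offset, end);
            // Process each part here instead of accumulating it all in
            // `result` if the whole object must not be held in RAM at once.
            result.write(part, 0, part.length);
            offset += part.length;
        }
        System.out.println(result.size());       // prints 2500
    }
}
```

In a real application you would also want the object's total size up front (e.g. from the object metadata) so the loop knows when to stop without issuing an extra request.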
