
AWS Amazon S3 Java SDK - Refresh credentials / token when expired while uploading large file

I'm trying to upload a large file to a server that uses a token, and the token expires after 10 minutes. A small file uploads fine, but a big file runs past the expiry: the upload keeps retrying forever while access is denied.

So I need to refresh the token inside the BasicAWSCredentials that is then passed to the AWSStaticCredentialsProvider, but I'm not sure how to do it. Please help =)

Worth mentioning: we use a local server (not the Amazon cloud) that provides the token, and for convenience we use Amazon's SDK.

Here is my code:

public void uploadMultipart(File file) throws Exception {
    //getUsetToken() returns an initial token for the given user,
    //calculates when a new token is needed, and refreshes it only when necessary

    String token = getUsetToken();
    String existingBucketName = myTenant.toLowerCase() + ".package.upload";
    String endPoint = urlAPI + "s3/buckets/";
    String strSize = FileUtils.byteCountToDisplaySize(FileUtils.sizeOf(file));
    System.out.println("File size: " + strSize);

    AwsClientBuilder.EndpointConfiguration endpointConfiguration = new AwsClientBuilder.EndpointConfiguration(endPoint, null);//note: Region has to be null
    //AWSCredentialsProvider        
    BasicAWSCredentials sessionCredentials = new BasicAWSCredentials(token, "NOT_USED");//secretKey should be set to NOT_USED

    AmazonS3 s3 = AmazonS3ClientBuilder
            .standard()
            .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
            .withEndpointConfiguration(endpointConfiguration)
            .enablePathStyleAccess()
            .build();

    int maxUploadThreads = 5;
    TransferManager tm = TransferManagerBuilder
            .standard()
            .withS3Client(s3)
            .withMultipartUploadThreshold((long) (5 * 1024 * 1024))
            .withExecutorFactory(() -> Executors.newFixedThreadPool(maxUploadThreads))
            .build();

    PutObjectRequest request = new PutObjectRequest(existingBucketName, file.getName(), file);
    //request.putCustomRequestHeader("Access-Token", token);
    ProgressListener progressListener = progressEvent -> System.out.println("Transferred bytes: " + progressEvent.getBytesTransferred());
    request.setGeneralProgressListener(progressListener);
    Upload upload = tm.upload(request);

    LocalDateTime uploadStartedAt = LocalDateTime.now();
    log.info("Starting upload at: " + uploadStartedAt);

    try {
        upload.waitForCompletion();
        //upload.waitForUploadResult();
        log.info("Upload completed. " + strSize);

    } catch (Exception e) {//AmazonClientException
        log.error("Error occurred while uploading file - " + strSize);
        e.printStackTrace();
    }
}

Solution found!

I found a way to get this working and, to be honest, I'm quite happy with the result. I've run many tests with big files (a 50 GB zip) and it worked well in every scenario.

My solution: remove the line BasicAWSCredentials sessionCredentials = new BasicAWSCredentials(token, "NOT_USED"); and replace it with the snippet below.

AWSCredentials is an interface, so we can implement it with something dynamic. The logic for detecting an expired token and fetching a fresh one lives inside the getToken() method, which means it can safely be called every time.

AWSCredentials sessionCredentials = new AWSCredentials() {
    @Override
    public String getAWSAccessKeyId() {
        try {
            return getToken(); //getToken() returns the current (refreshed if needed) token as a string
        } catch (Exception e) {
            return null;
        }
    }

    @Override
    public String getAWSSecretKey() {
        return "NOT_USED";
    }
};
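The key point is that the SDK consults getAWSAccessKeyId() each time it signs a request, so the token is re-read at call time rather than captured once at construction. A minimal, self-contained sketch of that deferred-lookup pattern (plain Java, no AWS SDK; the AtomicReference stands in for whatever store getToken() consults, and all names here are mine):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class DynamicToken {
    // Stand-in for the token store behind getToken().
    static final AtomicReference<String> currentToken = new AtomicReference<>("token-1");

    // Analogous to getAWSAccessKeyId(): resolves the token lazily,
    // on every call, instead of freezing it at construction time.
    static Supplier<String> accessKey() {
        return currentToken::get;
    }

    public static void main(String[] args) {
        Supplier<String> key = accessKey();
        System.out.println(key.get());   // token-1
        currentToken.set("token-2");     // simulate a token refresh
        System.out.println(key.get());   // token-2 — same supplier, new value
    }
}
```

This is why the anonymous AWSCredentials above works: each multipart part triggers a fresh signature, and each signature re-invokes getToken().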

When uploading a file (or parts of a multipart file), the credentials that you use must last long enough for the upload to complete. You CANNOT refresh the credentials mid-request: there is no way to tell AWS S3 that an already-signed request should now use new credentials.

You could break the upload into smaller files that upload quicker. Then only upload X parts. Refresh your credentials and upload Y parts. Repeat until all parts are uploaded. Then you will need to finish by combining the parts (which is a separate command). This is not a perfect solution as transfer speeds cannot be accurately controlled AND this means that you will have to write your own upload code (which is not hard).
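The batching idea above starts with computing the byte ranges of the parts up front, then uploading a batch, refreshing credentials, and continuing. A minimal, self-contained sketch of the range computation (plain Java, no AWS calls; names are mine, and note that S3 requires every part except the last to be at least 5 MB):

```java
import java.util.ArrayList;
import java.util.List;

public class PartPlanner {
    // Split totalBytes into parts of at most partSize bytes each,
    // returned as {offset, length} pairs. Every part except the last
    // has exactly partSize bytes; the last gets the remainder.
    static List<long[]> planParts(long totalBytes, long partSize) {
        List<long[]> parts = new ArrayList<>();
        for (long offset = 0; offset < totalBytes; offset += partSize) {
            parts.add(new long[] { offset, Math.min(partSize, totalBytes - offset) });
        }
        return parts;
    }

    public static void main(String[] args) {
        // A 12 MiB file with 5 MiB parts -> three parts: 5 MiB, 5 MiB, 2 MiB.
        for (long[] p : planParts(12L * 1024 * 1024, 5L * 1024 * 1024)) {
            System.out.println("offset=" + p[0] + " length=" + p[1]);
        }
    }
}
```

With the plan in hand, you would upload parts in batches sized to finish within one token lifetime, refresh the token between batches, and finally issue the separate "complete multipart upload" call that stitches the parts together.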
