
AWS Lambda function and S3 - change metadata on an object in S3 only if the object changed

I have a Lambda function that should add metadata headers to an object in S3, but only when the object has changed:

    ContentType: 'application/javascript'
    CacheControl: 'max-age=600'

But it turns out that the Lambda function hits the bucket around 100 times per second, not only when the object changes, and that costs a lot.

S3 access log:

    b6234e2652b93344f7 aa [02/Mar/2016:11:00:55 +0000] 54.0.0.209 arn:aws:sts::718436:assumed-role/lambda_s3_exec_role/awslambda_642_201609 805 REST.COPY.OBJECT /local.js "PUT /local.js HTTP/1.1" 200 - 234 4404 50 24 "-" "aws-sdk-nodejs/2.2.32 linux/v0.10.36" -
    b6234ee5f9cf0344f7 aa [02/Mar/2016:11:00:55 +0000] 54.0.0.209 arn:aws:sts::71836:assumed-role/lambda_s3_exec_role/awslambda_642_209 890005 REST.COPY.OBJECT_GET local.js - 200 - - 4404 - - - - -

Function:

    console.log('Loading function');
    var aws = require('aws-sdk');
    var s3 = new aws.S3({ apiVersion: '2006-03-01' });

    exports.handler = function(event, context) {
        //console.log('Received event:', JSON.stringify(event, null, 2));

        // Get the object from the event and show its content type
        var bucket = event.Records[0].s3.bucket.name;
        var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
        var params = {
            Bucket: bucket,
            Key: key,
            CopySource: encodeURIComponent(bucket + "/" + key),
            ContentType: 'application/javascript',
            CacheControl: 'max-age=600',
            Metadata: {},
            MetadataDirective: 'REPLACE'
        };
        //s3.getObject(params, function(err, data) {
        s3.copyObject(params, function(err, data) {
            if (err) {
                console.log(err);
                var message = "Error getting object " + key + " from bucket " + bucket +
                    ". Make sure they exist and your bucket is in the same region as this function.";
                console.log(message);
                context.fail(message);
            } else {
                console.log('CONTENT TYPE:', data.ContentType);
                context.succeed(data.ContentType);
            }
        });
    };

What do I need to change so that the function runs only when the object changes in S3?

Thanks in advance!

You have created an infinite loop for yourself! The Lambda function is triggered when the object changes, and by changing the metadata with copyObject you change the object and thus trigger the Lambda function again. You immediately hit the limit of 100 concurrent Lambda executions, which is there to make sure you don't have to pay a million euros because you wrote an infinite loop.

To get around this, you need to rethink your architecture. There are multiple options, but I think the easiest is this:

In your Lambda code, do an s3.getObject first and check whether the headers you want to set are already there. If they are, end the Lambda function right away. This way the function runs only twice per edit. Not 100% ideal, but good enough for practical purposes imo. See the sketch below.
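Roughly like this (a minimal, untested sketch; it assumes aws-sdk v2 as in the question, and uses s3.headObject instead of s3.getObject since only the headers are needed, not the body):

    console.log('Loading function');
    var aws = require('aws-sdk');
    var s3 = new aws.S3({ apiVersion: '2006-03-01' });

    exports.handler = function(event, context) {
        var bucket = event.Records[0].s3.bucket.name;
        var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

        // Fetch only the object's headers first.
        s3.headObject({ Bucket: bucket, Key: key }, function(err, head) {
            if (err) {
                context.fail('Error reading headers of ' + key + ': ' + err);
                return;
            }
            // If the headers are already what we want, stop here.
            // This is what breaks the copyObject -> S3 event -> copyObject loop.
            if (head.ContentType === 'application/javascript' &&
                head.CacheControl === 'max-age=600') {
                context.succeed('Headers already set, nothing to do');
                return;
            }
            // Otherwise rewrite the metadata with a self-copy, as in the question.
            s3.copyObject({
                Bucket: bucket,
                Key: key,
                CopySource: encodeURIComponent(bucket + '/' + key),
                ContentType: 'application/javascript',
                CacheControl: 'max-age=600',
                Metadata: {},
                MetadataDirective: 'REPLACE'
            }, function(err, data) {
                if (err) {
                    context.fail('Error copying ' + key + ': ' + err);
                } else {
                    context.succeed('Headers updated on ' + key);
                }
            });
        });
    };

The self-copy still triggers one more S3 event, but on that second invocation headObject sees the headers are already set and the function exits immediately, which is why you get two executions per edit instead of an endless loop.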

