
AWS S3 automated file upload

I wanted to automate the process of copying files between two S3 buckets.

Scenario: whenever a file with the same name is uploaded to the first S3 bucket, the older version of that file should be moved to a second S3 bucket, and only the newer version should remain in the first bucket. Please suggest a process for doing this.

I don't know if there is a direct way, but I would like to use shell scripts to achieve something like:

aws s3 cp s3://newfile/ s3://backupfile/ --recursive
aws s3 cp /Local/file s3://newfile/ --recursive

When an object is uploaded to Amazon S3 with the same Key (filename) as an existing object, the new object will overwrite the existing object.

This can be avoided by activating Versioning on the bucket, which will retain all versions of an object. So, if a new object is uploaded with the same Key, then the old object becomes a 'previous version' and is still accessible in S3. (You will pay for the storage of all versions.)
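
For reference, versioning can also be turned on programmatically. Here is a minimal Python (boto3) sketch, assuming the first bucket is named newfile as in your commands:

    import boto3

    s3 = boto3.client("s3")

    # Enable versioning so an overwritten object is kept as a previous
    # version instead of being lost.
    s3.put_bucket_versioning(
        Bucket="newfile",  # hypothetical bucket name taken from the question
        VersioningConfiguration={"Status": "Enabled"},
    )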

If your requirement is simply to preserve previous versions of objects, then this should be sufficient for your needs, without having to copy the older version to a different bucket.
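
Once versioning is on, previous versions remain retrievable by their VersionId. A minimal boto3 sketch of fetching the most recent previous version, assuming the bucket newfile and a hypothetical key report.csv:

    import boto3

    s3 = boto3.client("s3")

    # List all versions of the key; entries with IsLatest=False are the
    # older versions that would otherwise have been overwritten.
    resp = s3.list_object_versions(Bucket="newfile", Prefix="report.csv")
    older = [v for v in resp.get("Versions", []) if not v["IsLatest"]]

    if older:
        previous = max(older, key=lambda v: v["LastModified"])
        # Fetch that previous version directly by its VersionId.
        obj = s3.get_object(Bucket="newfile", Key="report.csv",
                            VersionId=previous["VersionId"])
        print(obj["Body"].read()[:100])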

If you really wanted to do as you ask, then you would need:

  • Versioning turned on (to preserve older versions)
  • An AWS Lambda function, triggered by the upload, that copies the old version to a different bucket and optionally deletes that 'old version' from the first bucket (a minimal sketch follows this list)
  • A plan for what to do when there is another upload of the same object -- should it copy it to the 'other bucket' and overwrite the older version that is already there? It might need Versioning too!
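
If you do go the Lambda route, here is a minimal sketch of what such a handler could look like. It assumes versioning is enabled on the source bucket and that the backup bucket is named backupfile, as in your commands; it is an illustration, not a drop-in solution:

    import boto3
    from urllib.parse import unquote_plus

    s3 = boto3.client("s3")
    BACKUP_BUCKET = "backupfile"  # hypothetical name taken from the question


    def lambda_handler(event, context):
        for record in event["Records"]:
            # The S3 notification tells us which object was just (re)uploaded.
            bucket = record["s3"]["bucket"]["name"]
            key = unquote_plus(record["s3"]["object"]["key"])

            # Find the newest non-current version, i.e. the object that this
            # upload just replaced.
            resp = s3.list_object_versions(Bucket=bucket, Prefix=key)
            older = [v for v in resp.get("Versions", [])
                     if v["Key"] == key and not v["IsLatest"]]
            if not older:
                continue  # first upload of this key, nothing to back up

            previous = max(older, key=lambda v: v["LastModified"])

            # Copy the replaced version into the backup bucket...
            s3.copy_object(
                Bucket=BACKUP_BUCKET,
                Key=key,
                CopySource={"Bucket": bucket, "Key": key,
                            "VersionId": previous["VersionId"]},
            )
            # ...and optionally remove that version from the first bucket.
            s3.delete_object(Bucket=bucket, Key=key,
                             VersionId=previous["VersionId"])

The function would be wired to the first bucket's s3:ObjectCreated:* event notification.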
