How to permanently restore objects from Amazon Glacier to the Standard tier in an S3 bucket using Python?
I am trying to permanently restore some objects in an S3 bucket from Glacier to the Standard tier. Below is my code:
import boto3

def restoreObject(latest_object):
    s3 = boto3.client('s3')
    s3_resource = boto3.resource('s3')
    my_bucket = s3_resource.Bucket('bucket_name')
    for bucket_object in my_bucket.objects.all():
        object_key = bucket_object.key
        # boto3 reports Deep Archive objects with storage_class 'DEEP_ARCHIVE'
        if bucket_object.storage_class == 'DEEP_ARCHIVE':
            if object_key in latest_object:
                s3.restore_object(Bucket='bucket_name', Key=object_key,
                                  RestoreRequest={'Days': 30, 'Tier': 'Standard'})
But this restores the objects for a limited time only (30 days in my case). Is there a way to restore objects permanently from Glacier Deep Archive to the Standard tier?
To permanently change the Storage Class of an object in Amazon S3, either:
- Use CopyObject with the same Key (overwriting itself) while specifying the new Storage Class, or
- Use a Lifecycle rule to transition the object to the new Storage Class.
However, Lifecycle policies do not seem to support going from Glacier to Standard tiers.
Therefore, you would need to copy the object to itself to change the storage class. Note that the restore must have completed first: an object still in Glacier Deep Archive cannot be read, so it cannot be copied until its temporary restored copy is available:
copy_source = {'Bucket': 'bucket_name', 'Key': object_key}
my_bucket.copy(copy_source, object_key,
               ExtraArgs={'StorageClass': 'STANDARD', 'MetadataDirective': 'COPY'})
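Here is a minimal end-to-end sketch combining the two steps. It is an illustration, not the original poster's code: the bucket name, key list, and polling interval are hypothetical placeholders. It polls head_object until the temporary restore has finished (the Restore field reads ongoing-request="false" once the copy is readable), then copies each object onto itself with StorageClass='STANDARD':

import time
import boto3

# Hypothetical names for illustration; substitute your own values.
BUCKET = 'bucket_name'
KEYS = ['path/to/object-1', 'path/to/object-2']

s3 = boto3.client('s3')
my_bucket = boto3.resource('s3').Bucket(BUCKET)

def restore_finished(bucket, key):
    # While a restore is in progress, head_object returns a Restore
    # field containing 'ongoing-request="true"'; once the temporary
    # copy is available it flips to 'ongoing-request="false"'.
    head = s3.head_object(Bucket=bucket, Key=key)
    return 'ongoing-request="false"' in head.get('Restore', '')

for key in KEYS:
    # Wait for the temporary restored copy to become readable.
    while not restore_finished(BUCKET, key):
        time.sleep(60)
    # Copy the object onto itself with a new storage class, which
    # permanently moves it out of Deep Archive.
    my_bucket.copy(
        {'Bucket': BUCKET, 'Key': key},
        key,
        ExtraArgs={'StorageClass': 'STANDARD', 'MetadataDirective': 'COPY'},
    )

The in-place copy writes a brand-new object in the Standard class (a new version, if versioning is enabled), so the 30-day expiry on the restored copy no longer applies.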
Here's a nice example: aws s3 bucket change storage class by object size · GitHub