
S3 Bucket replication using CDK in Python

I am trying to set up cross-region replication using Python in CDK. I have enabled versioning on both buckets and added a policy to replicate objects to the destination bucket. I want to add a "replication rule configuration" to the source bucket; I already have a working process for this using YAML in a CloudFormation template.

But I want to implement the same thing using Python. Can anyone please suggest something for this? Thanks in advance!

I had a use case where I had to enable bucket replication for my bucket with multiple destination buckets.

I tried to replicate the policy defined here: https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/730#issuecomment-753692737

Here is a snippet of my code:

    # assumes "from aws_cdk import aws_s3 as s3" at module level
    my_dest_buckets = ["bucket-1", "bucket-2"]
    s3.CfnBucket(
        self,
        "my-id",
        bucket_name="my-bucket",
        # CfnBucket (the L1 construct) expects a PublicAccessBlockConfigurationProperty,
        # not the L2 helper s3.BlockPublicAccess.BLOCK_ALL
        public_access_block_configuration=s3.CfnBucket.PublicAccessBlockConfigurationProperty(
            block_public_acls=True,
            block_public_policy=True,
            ignore_public_acls=True,
            restrict_public_buckets=True,
        ),
        bucket_encryption=s3.CfnBucket.BucketEncryptionProperty(
            server_side_encryption_configuration=[
                s3.CfnBucket.ServerSideEncryptionRuleProperty(
                    server_side_encryption_by_default=s3.CfnBucket.ServerSideEncryptionByDefaultProperty(
                        sse_algorithm="AES256"
                    )
                )
            ]
        ),
        ownership_controls=s3.CfnBucket.OwnershipControlsProperty(
            rules=[
                s3.CfnBucket.OwnershipControlsRuleProperty(
                    object_ownership="BucketOwnerPreferred"
                )
            ]
        ),
        # Versioning must be enabled on both source and destination buckets
        versioning_configuration=s3.CfnBucket.VersioningConfigurationProperty(
            status="Enabled"
        ),
        replication_configuration=s3.CfnBucket.ReplicationConfigurationProperty(
            role=f"arn:aws:iam::{self.account}:role/my-role",
            rules=[
                # One rule per destination bucket (the buckets outside the main region)
                s3.CfnBucket.ReplicationRuleProperty(
                    id=f"rule-{bucket}",
                    destination=s3.CfnBucket.ReplicationDestinationProperty(
                        bucket=f"arn:aws:s3:::{bucket}",
                    ),
                    delete_marker_replication=s3.CfnBucket.DeleteMarkerReplicationProperty(
                        status="Disabled"
                    ),
                    status="Enabled",
                    priority=count,
                    # An empty filter forces AWS to use the latest (V2) schema.
                    # The default (V1) schema allows only one destination bucket.
                    filter=s3.CfnBucket.ReplicationRuleFilterProperty(prefix=""),
                )
                for count, bucket in enumerate(my_dest_buckets)
            ],
        ),
    )
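
The snippet above assumes the replication role arn:aws:iam::{account}:role/my-role already exists. For completeness, here is a rough sketch of how such a role might be defined in CDK Python; the construct id and the exact set of permissions are illustrative and should be checked against the S3 replication documentation:

    from aws_cdk import aws_iam as iam

    # Hypothetical replication role; S3 assumes it to copy objects.
    replication_role = iam.Role(
        self,
        "replication-role",
        assumed_by=iam.ServicePrincipal("s3.amazonaws.com"),
    )
    # Read access on the source bucket and its objects
    replication_role.add_to_policy(
        iam.PolicyStatement(
            actions=[
                "s3:GetReplicationConfiguration",
                "s3:ListBucket",
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectVersionTagging",
            ],
            resources=["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
        )
    )
    # Write access on every destination bucket
    replication_role.add_to_policy(
        iam.PolicyStatement(
            actions=["s3:ReplicateObject", "s3:ReplicateDelete", "s3:ReplicateTags"],
            resources=[f"arn:aws:s3:::{bucket}/*" for bucket in my_dest_buckets],
        )
    )

With a role like this, role=replication_role.role_arn could be passed to ReplicationConfigurationProperty instead of the hand-built ARN.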

I'm not sure if this is helpful at all, but I was bound to the Bucket class in Java (and not CfnBucket) and therefore needed a little workaround.

final Bucket bucket = Bucket.Builder.create(this, bucketName)
                .bucketName(bucketName)
                .publicReadAccess(live)
                ...

CfnBucket.ReplicationConfigurationProperty replicationConfigurationProperty = CfnBucket.ReplicationConfigurationProperty.builder()
                .role(replicationRole.getRoleArn())
                .rules(...)
                ...

CfnBucket cfnBucket = (CfnBucket)bucket.getNode().getDefaultChild();
cfnBucket.setReplicationConfiguration(replicationConfigurationProperty);
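
The same workaround should translate to Python if you are starting from the L2 Bucket construct: reach the underlying CfnBucket via the node's default child and set the replication configuration on it. A minimal sketch, with illustrative names and a hard-coded destination ARN:

    from aws_cdk import aws_s3 as s3

    bucket = s3.Bucket(
        self,
        "my-source-bucket",
        versioned=True,  # replication requires versioning on both sides
    )

    # Escape hatch: the L1 CfnBucket behind the L2 Bucket
    cfn_bucket = bucket.node.default_child
    cfn_bucket.replication_configuration = s3.CfnBucket.ReplicationConfigurationProperty(
        role=f"arn:aws:iam::{self.account}:role/my-role",
        rules=[
            s3.CfnBucket.ReplicationRuleProperty(
                status="Enabled",
                destination=s3.CfnBucket.ReplicationDestinationProperty(
                    bucket="arn:aws:s3:::my-destination-bucket"
                ),
            )
        ],
    )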

Current cdk "S3Bucket" construct do not has direct replication method exposed.当前的 cdk "S3Bucket" 构造没有暴露直接复制方法。 Its in AWS's feature list.它在 AWS 的功能列表中。 But you can do with using CfnS3Bucket class.但是您可以使用 CfnS3Bucket class。 Here you need to create the two stack one in primary region and secondary region, which will create the two buckets, one in one region and second in another region.在这里,您需要在主要区域和次要区域中创建两个堆栈,这将创建两个存储桶,一个在一个区域中,另一个在另一个区域中。 And using Cfn constructs you can easily achieve the replication.并且使用 Cfn 构造,您可以轻松实现复制。

Sample repo for your reference: https://github.com/techcoderunner/s3-bucket-cross-region-replication-cdk
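
A minimal sketch of that two-stack layout at the app level; SourceBucketStack and DestinationBucketStack are hypothetical stack classes (each would contain a CfnBucket like the snippets above), and the account id and regions are placeholders:

    import aws_cdk as cdk  # CDK v2

    app = cdk.App()

    # Destination bucket in the secondary region; deploy this stack first so the
    # destination bucket exists before the source's replication rule points at it.
    DestinationBucketStack(
        app,
        "dest-bucket-stack",
        env=cdk.Environment(account="123456789012", region="us-west-2"),
    )

    # Source bucket, carrying the ReplicationConfiguration, in the primary region
    SourceBucketStack(
        app,
        "source-bucket-stack",
        env=cdk.Environment(account="123456789012", region="us-east-1"),
    )

    app.synth()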
