
Invalid bucket name error when AWS Lambda tries to copy files from an S3 bucket to another S3 bucket

I'm new to python. I have an event-triggered AWS Lambda function that copies files from an S3 bucket to another S3 bucket. The destination S3 path where I want to copy the file is: "dest_bucket/folder1/test". It gives me this error when I try to run it:

Invalid bucket name "dest_bucket/folder1/test": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$" or be an ARN matching the regex "^arn:(aws).*:(s3|s3-object-lambda):[a-z\-0-9]*:[0-9]{12}:accesspoint[/:][a-zA-Z0-9\-.]{1,63}$|^arn:(aws).*:s3-outposts:[a-z\-0-9]+:[0-9]{12}:outpost[/:][a-zA-Z0-9\-]{1,63}[/:]accesspoint[/:][a-zA-Z0-9\-]{1,63}$"

The source bucket does not have any folder structure. The destination bucket has a folder structure, and the files need to be copied under "dest_bucket/folder1/test". The error occurs at this line in the lambda function: "destination_bucket_name = 'dest_bucket/folder1/test'". If I simply write the destination bucket name without the slashes, it works. Any idea how I should write this?

import json
import boto3
import os
import uuid

def lambda_handler(event, context):
    try:
        client = boto3.client('sts')
        response = client.assume_role(RoleArn='arn:aws:iam::xxx:role/xxx_lambda_role',RoleSessionName="lambda")
        session = boto3.Session(aws_access_key_id=response['Credentials']['AccessKeyId'],aws_secret_access_key=response['Credentials']['SecretAccessKey'],aws_session_token=response['Credentials']['SessionToken'])
        print(session)
        print("role has been assumed")
        
        s3_client = boto3.client("s3", aws_access_key_id=response['Credentials']['AccessKeyId'],aws_secret_access_key=response['Credentials']['SecretAccessKey'],aws_session_token=response['Credentials']['SessionToken'])
        #s3_client = boto3.client("s3")
        
        #base = read from parameter store
        #table_partion = read from file
        destination_bucket_name = 'dest_bucket/folder1/test'

        # event contains all information about uploaded object
        print("Event :", event)

        # Source bucket
        source_bucket_name = event['Records'][0]['s3']['bucket']['name']
        print(source_bucket_name)

        # Filename of object (with path)
        file_key_name = event['Records'][0]['s3']['object']['key']
        #file_key_name = 'empty_test.txt'
        print(file_key_name)

        # Copy Source Object
        copy_source_object = {'Bucket': source_bucket_name, 'Key': file_key_name}
        print(copy_source_object)

        # S3 copy object operation
        s3_client.copy_object(CopySource=copy_source_object, Bucket=destination_bucket_name, Key=file_key_name)


        return {
            'statusCode': 200,
            'body': json.dumps('S3 events Lambda!')
        }

    except Exception as e:
        print(e)
        raise e

From the docs:

The bucket name can be between 3 and 63 characters long, and can contain only lower-case characters, numbers, periods, and dashes.

Each label in the bucket name must start with a lowercase letter or number.

The bucket name cannot contain underscores, end with a dash, have consecutive periods, or use dashes adjacent to periods.

The bucket name cannot be formatted as an IP address (198.51.100.24).

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-s3-bucket-naming-requirements.html
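The rules above can be sketched as a small validator. This is a hypothetical helper (not part of boto3 or the AWS SDK) that covers just the rules quoted here, not every edge case in the official spec:

```python
import re

# Hypothetical helper illustrating the quoted naming rules; not an official API.
# First and last characters must be a lowercase letter or digit; the middle may
# also contain periods and dashes. An IP-address-shaped name is rejected.
BUCKET_NAME_RE = re.compile(r"^(?!\d+\.\d+\.\d+\.\d+$)[a-z0-9](?:[a-z0-9.-]*[a-z0-9])?$")

def is_valid_bucket_name(name: str) -> bool:
    """Check a name against the general-purpose S3 bucket rules quoted above."""
    if not 3 <= len(name) <= 63:
        return False
    # No consecutive periods, and no dashes adjacent to periods.
    if ".." in name or ".-" in name or "-." in name:
        return False
    return bool(BUCKET_NAME_RE.match(name))
```

Run against the name from the question, `is_valid_bucket_name("dest_bucket/folder1/test")` returns `False` (slashes and underscores are not allowed), which is exactly what the boto3 error is complaining about.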

Make sure you just use the bucket name for bucket name :) As for the "path", it's really a fake thing in S3 - the only real thing is the object key. Slashes are just characters in that name; they have no special meaning.

You can use:

destination_bucket = 'dest_bucket'
destination_path = 'folder1/test/'

for record in event['Records']:
    source_bucket = record['s3']['bucket']['name']
    source_key = record['s3']['object']['key']

    copy_source_object = {'Bucket': source_bucket, 'Key': source_key}

    destination_key = destination_path + source_key
    s3_client.copy_object(CopySource=copy_source_object, Bucket=destination_bucket, Key=destination_key)

This will loop through all incoming Records within the event.

It will then create an object with the name folder1/test/ + the name (Key) of the source object.
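To see what key the loop produces, here is the same construction run against a made-up event record (the bucket and file names are invented for illustration; the record shape follows the S3 event notification format):

```python
# Sample S3 event record with invented bucket/object names,
# shaped like the notification payload Lambda receives.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "source-bucket"},
                "object": {"key": "empty_test.txt"}}}
    ]
}

destination_path = "folder1/test/"

for record in sample_event["Records"]:
    source_key = record["s3"]["object"]["key"]
    # The "folder" is just a key prefix; concatenation is all that is needed.
    destination_key = destination_path + source_key
    print(destination_key)  # folder1/test/empty_test.txt
```

The resulting key, folder1/test/empty_test.txt, is what would be passed as `Key=` to `copy_object`, with `Bucket='dest_bucket'` kept separate.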

