Pass arguments from S3 trigger to Lambda function
Using AWS Cloud Services, I am using an S3 trigger to monitor a bucket and invoke a Lambda function. This function then picks up the S3 object to populate a DynamoDB table.
The problem is that I now need to monitor multiple directories for changes, and each directory has metadata (not available in the object) that needs to be passed to DynamoDB. I do not know of a way to pass this meta information from the trigger to the Lambda. I currently have the Lambda duplicated for each directory, with the meta information saved as environment variables for each Lambda. This works, but it feels like a terrible hack.
How can I go about using a single Lambda to monitor multiple directories, passing the meta arguments from the trigger to the Lambda?
Sadly, you can't add extra information to the S3 notification records. But if the folders are part of the same bucket, then one Lambda could be enough, in my opinion. This is based on the fact that you can differentiate between directories using the prefixes of the S3 object keys.
For example, if you upload the following files to the bucket:
dir1/file1.csv
dir2/file3.txt
your Lambda would be triggered for each of them. In the Lambda you could use basic if-else matching to check whether your object's prefix is dir1 or dir2. Based on this, you could choose different metadata to be written to DynamoDB.
Very roughly, based on two folders, your Lambda function could contain (pseudo-code):
if object_key.startswith('dir1'):
    metadata = {some_metadata_for_dir1}
elif object_key.startswith('dir2'):
    metadata = {some_metadata_for_dir2}
dynamodb.put_item({object_key + metadata})
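Fleshing that pseudo-code out, a minimal handler could look like the following sketch. The table name "my-table", the prefix keys, and the metadata values are placeholders; adjust them to your setup.

```python
from urllib.parse import unquote_plus

# Per-directory metadata, keyed by S3 key prefix (illustrative values;
# replace with your own).
METADATA_BY_PREFIX = {
    "dir1/": {"source": "system-a", "owner": "team-1"},
    "dir2/": {"source": "system-b", "owner": "team-2"},
}

def metadata_for_key(key):
    """Return the metadata for the first prefix that matches the object key."""
    for prefix, meta in METADATA_BY_PREFIX.items():
        if key.startswith(prefix):
            return meta
    return {}

def lambda_handler(event, context):
    import boto3  # bundled with the Lambda Python runtime
    table = boto3.resource("dynamodb").Table("my-table")  # placeholder name
    for record in event["Records"]:
        # S3 event notifications deliver the key URL-encoded, so decode it.
        key = unquote_plus(record["s3"]["object"]["key"])
        table.put_item(Item={"object_key": key, **metadata_for_key(key)})
```

Keeping the prefix matching in its own function (rather than inline if-else) makes it easy to add directories by editing one dict.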
You are basically doing this anyway, but passing the metadata through env variables to different Lambda functions. Obviously, if you have many folders to monitor, you can store the metadata outside of the Lambda, e.g. in another DynamoDB table, or in Parameter Store if the metadata changes often.
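For the many-folders case, here is one hedged sketch of the lookup-table approach, assuming a hypothetical DynamoDB table named "directory-metadata" whose items look like {"prefix": "dir1/", "metadata": {...}}:

```python
def to_metadata_map(items):
    """Build a {prefix: metadata} lookup dict from the raw table items."""
    return {item["prefix"]: item["metadata"] for item in items}

def load_metadata_map(table_name="directory-metadata"):
    """Scan the (hypothetical) metadata table once.

    Pagination is omitted for brevity; a table of per-directory metadata
    should comfortably fit in a single scan page.
    """
    import boto3  # bundled with the Lambda Python runtime
    table = boto3.resource("dynamodb").Table(table_name)
    return to_metadata_map(table.scan()["Items"])
```

You could cache the result of load_metadata_map at module level so that warm Lambda invocations skip the scan.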
Hope this helps.