Lambda trigger dynamic specific path s3 upload
I am trying to create a Lambda function that will get triggered once a folder is uploaded to an S3 bucket. But the Lambda will perform an operation that saves files back to the same folder; how can I do so without creating a self-invoking function?
I want to upload the following folder structure to the bucket:
Project_0001/input/inputs.csv
The outputs will be created and saved to:
Project_0001/output/outputs.csv
But my project number will change, so I can't simply assign a static prefix. Is there a way to dynamically change the prefix, something like:
Project_*/input/
From Shubham's comment I drafted my solution using the prefix and suffix filters.
For my case, I set the prefix to 'Project_', and for the suffix I chose one specific file for the trigger, so my suffix is '/input/myFile.csv'.
So every time I upload a Project_*/input/ folder containing all my files, including myFile.csv, the function is triggered; I then save my output in the same project folder, under the output folder, which does not trigger the function again.
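For reference, the trigger filter can be set in the console or with boto3; a minimal sketch, where the bucket name and function ARN are placeholders:

import boto3

s3 = boto3.client('s3')
s3.put_bucket_notification_configuration(
    Bucket='my-bucket',  # placeholder bucket name
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            # placeholder ARN; use your function's ARN
            'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:123456789012:function:my-function',
            'Events': ['s3:ObjectCreated:*'],
            # prefix + suffix together narrow the trigger to one file per project
            'Filter': {'Key': {'FilterRules': [
                {'Name': 'prefix', 'Value': 'Project_'},
                {'Name': 'suffix', 'Value': '/input/myFile.csv'},
            ]}},
        }]
    },
)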
I get the project name with the following code:
key = event['Records'][0]['s3']['object']['key']
project_id = key.split("/")[0]
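Putting it together, a minimal handler sketch under these filters; the output-generation step and output_bytes are placeholders, not part of my actual code:

import urllib.parse
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    # keys in S3 event notifications arrive URL-encoded, so decode first
    key = urllib.parse.unquote_plus(record['s3']['object']['key'])
    project_id = key.split("/")[0]  # e.g. 'Project_0001'

    # ... process the inputs and produce output_bytes (placeholder) ...
    output_bytes = b""

    # the output key ends in '/output/outputs.csv', which does not match
    # the '/input/myFile.csv' suffix filter, so this write does not
    # re-trigger the function
    s3.put_object(
        Bucket=bucket,
        Key=f"{project_id}/output/outputs.csv",
        Body=output_bytes,
    )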