
Unzip file content hosted in S3 to multiple CloudFront URLs through a single Lambda function

Is there a way to unzip the contents of a single file from S3 and make them available at multiple CloudFront URLs by triggering a Lambda function once?

Let's say a zip file containing multiple JPG/PNG files has already been uploaded to S3. The intention is to run the Lambda function only once to unzip all of its contents and make them available at multiple CloudFront URLs.

In the S3 bucket:

archive.zip
   a.jpg
   b.jpg
   c.jpg

Through CloudFront:

https://1232.cloudfront.net/a.jpg
https://1232.cloudfront.net/b.jpg
https://1232.cloudfront.net/c.jpg

I am looking for a solution where the Lambda function is triggered whenever an S3 upload happens and makes all the files in the zip available through multiple CloudFront URLs.

Hello Prathap Parameswar,

I think you can resolve your problem like this:

  • First, you need to extract your zip file.
  • Second, you upload the extracted files back to S3.
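The extraction step can be exercised locally with just the Python standard library, no AWS needed. This sketch builds a small zip archive in memory to stand in for the downloaded S3 object body; the entry names and contents are made up for illustration:

```python
import zipfile
from io import BytesIO

# Build a small zip archive in memory, standing in for the S3 object body
buffer = BytesIO()
with zipfile.ZipFile(buffer, 'w') as zf:
    zf.writestr('a.jpg', b'fake-jpeg-bytes-a')
    zf.writestr('b.jpg', b'fake-jpeg-bytes-b')

# Reading it back mirrors what the Lambda does with the downloaded zip
z = zipfile.ZipFile(BytesIO(buffer.getvalue()))
print(z.namelist())            # ['a.jpg', 'b.jpg']
print(z.open('a.jpg').read())  # b'fake-jpeg-bytes-a'
```

Each name returned by `namelist()` can then be used both as the argument to `z.open()` and as the S3 key for the re-upload, which is what makes the extracted files addressable individually.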

This is the Lambda Python function:

import boto3
from io import BytesIO
import zipfile

def lambda_handler(event, context):
    s3_resource = boto3.resource('s3')
    source_bucket = 'upload-zip-folder'
    target_bucket = 'upload-extracted-folder'

    my_bucket = s3_resource.Bucket(source_bucket)

    for file in my_bucket.objects.all():
        if str(file.key).endswith('.zip'):
            # Download the zip object into an in-memory buffer
            zip_obj = s3_resource.Object(bucket_name=source_bucket, key=file.key)
            buffer = BytesIO(zip_obj.get()["Body"].read())

            # Upload every entry of the archive to the target bucket,
            # keyed by its name inside the zip
            z = zipfile.ZipFile(buffer)
            for filename in z.namelist():
                try:
                    s3_resource.meta.client.upload_fileobj(
                        z.open(filename),
                        Bucket=target_bucket,
                        Key=filename
                    )
                except Exception as e:
                    print(e)
        else:
            print(file.key + ' is not a zip file.')
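Note that the loop above rescans every object in the source bucket on each invocation. Since the question asks for a per-upload trigger, a variant could instead read the bucket and key straight from the S3 event notification that invoked the function. A minimal sketch of that parsing step (the helper name and the trimmed event payload are assumptions for illustration, not part of the original answer; object keys arrive URL-encoded in the event, so they need decoding):

```python
from urllib.parse import unquote_plus

def object_from_event(event):
    # Extract (bucket, key) from the first record of an S3 event
    # notification; keys are URL-encoded in the payload.
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = unquote_plus(record['s3']['object']['key'])
    return bucket, key

# Trimmed-down shape of an S3 put-event payload, for illustration
sample_event = {
    'Records': [
        {'s3': {'bucket': {'name': 'upload-zip-folder'},
                'object': {'key': 'archive%2Bv1.zip'}}}
    ]
}
print(object_from_event(sample_event))  # ('upload-zip-folder', 'archive+v1.zip')
```

With the bucket and key in hand, the handler can fetch and unzip just the object that was uploaded, instead of iterating over the whole bucket.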

Hope this can help you
