How to read a CSV file from an S3 bucket using a Lambda function and boto3?
I have an S3 bucket and I have set up a Lambda function that will display the contents of a CSV file when a CSV file is uploaded to the bucket. The S3 bucket is already set as a trigger for my Lambda function. Can you please suggest and advise?
An AWS Lambda function is code that you write. You can make it do anything you wish.
For your first scenario of displaying a CSV file in CloudWatch Logs, the Lambda function should:

- Retrieve the name of the bucket and object from the event passed to the Lambda function
- Download the file to the /tmp/ directory
- print() the information that you wish to appear in CloudWatch Logs (see the sketch after this list)
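A rough sketch of that first scenario might look like the following. The filename /tmp/input.csv is made up for illustration, and it assumes the uploaded object is a plain UTF-8 CSV:

import csv
import urllib.parse

import boto3

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # Get the bucket and object key from the S3 event notification
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])

    # /tmp/ is the only writable directory in the Lambda environment
    localFilename = '/tmp/input.csv'
    s3_client.download_file(bucket, key, localFilename)

    # Anything print()ed ends up in CloudWatch Logs
    with open(localFilename, newline='') as f:
        for row in csv.reader(f):
            print(row)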
For your second question of "adding an extra column", the Lambda function should:

- Retrieve the name of the bucket and object from the event passed to the Lambda function
- Download the file to the /tmp/ directory

The code would look something like:
import urllib.parse

import boto3

# Connect to S3
s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # Get the bucket and object key from the event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])
    localFilename = '/tmp/file.txt'

    # Download the file from S3 to the local filesystem
    s3_client.download_file(bucket, key, localFilename)

    # Do stuff here with the local file (your code here!)

    # Upload the modified file back to S3
    s3_client.upload_file(localFilename, bucket, key)
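The "do stuff here" placeholder is where the extra column would be added. One minimal sketch of that step, assuming you simply want to append a constant column to every row (the column name status and value processed below are purely illustrative), could be:

import csv

def add_column(inputFilename, outputFilename, header, value):
    # Copy the CSV, appending one extra column to the header and every row.
    # Assumes the first row of the file is a header row.
    with open(inputFilename, newline='') as src, \
         open(outputFilename, 'w', newline='') as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        writer.writerow(next(reader) + [header])  # header row
        for row in reader:
            writer.writerow(row + [value])

You would call it between the download and the upload, for example add_column(localFilename, '/tmp/modified.csv', 'status', 'processed'), and then pass '/tmp/modified.csv' to upload_file instead of the original filename.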