How can I use AWS Lambda to write a file to S3 (Python)?
I have tried to use a Lambda function to write a file to S3. The test run shows "succeeded", but nothing appears in my S3 bucket. What happened? Can anyone give me some advice or a solution? Thanks a lot. Here's my code.
import json
import boto3

def lambda_handler(event, context):
    string = "dfghj"
    file_name = "hello.txt"
    lambda_path = "/tmp/" + file_name
    s3_path = "/100001/20180223/" + file_name
    with open(lambda_path, 'w+') as file:
        file.write(string)
        file.close()
    s3 = boto3.resource('s3')
    s3.meta.client.upload_file(lambda_path, 's3bucket', s3_path)
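One thing worth checking in the code above (my own observation, not from the original post): the key "/100001/20180223/hello.txt" starts with a slash. S3 keys are plain strings, so a leading "/" becomes part of the key and the object ends up under an empty-named "folder" that is easy to miss in the console. A small sketch of normalizing keys before uploading:

```python
def normalize_key(key):
    # S3 treats a leading "/" as part of the object key, which creates
    # an empty-named "folder" in the console view; strip it first.
    return key.lstrip("/")

print(normalize_key("/100001/20180223/hello.txt"))  # 100001/20180223/hello.txt
```

With the normalized key, the uploaded object shows up under the 100001/ prefix as expected.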
I've had success streaming data to S3; it has to be encoded to do this:
import boto3

def lambda_handler(event, context):
    string = "dfghj"
    encoded_string = string.encode("utf-8")

    bucket_name = "s3bucket"
    file_name = "hello.txt"
    s3_path = "100001/20180223/" + file_name

    s3 = boto3.resource("s3")
    s3.Bucket(bucket_name).put_object(Key=s3_path, Body=encoded_string)
If the data is in a file, you can read the file and send it up:
with open(filename) as f:
    string = f.read()

encoded_string = string.encode("utf-8")
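For binary files (images, archives), the encode step doesn't apply: opening the file in "rb" mode already yields bytes that put_object accepts as Body. A minimal sketch (the helper name is my own):

```python
def read_body(filename, binary=False):
    """Read a local file into bytes suitable for put_object's Body."""
    if binary:
        # binary mode returns bytes directly, no encoding needed
        with open(filename, "rb") as f:
            return f.read()
    # text mode returns str, which must be encoded before upload
    with open(filename) as f:
        return f.read().encode("utf-8")
```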
My response is very similar to Tim B's, but the most important part is:
1. Go to S3 and create the bucket you want to write to.
2. Follow the steps below; otherwise your Lambda will fail due to permission/access errors. I've copied the linked content here as well, in case the URL changes or the page moves.
a. Open the roles page in the IAM console.
b. Choose Create role.
c. Create a role with the following properties:
- Trusted entity – AWS Lambda.
- Permissions – AWSLambdaExecute.
- Role name – lambda-s3-role.
The AWSLambdaExecute policy has the permissions that the function needs to manage objects in Amazon S3 and write logs to CloudWatch Logs.
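For reference, the AWSLambdaExecute managed policy is, to the best of my recollection, roughly the following (verify against the IAM console before relying on it):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:*",
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
```

Note that it grants PutObject on all buckets; for production you would normally scope the Resource to your specific bucket ARN.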
Copy and paste this into your Lambda Python function:
import json, boto3, os, sys, uuid
from urllib.parse import unquote_plus

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    some_text = "test"
    # put the bucket name you created in step 1
    bucket_name = "my_buck_name"
    file_name = "my_test_file.csv"
    lambda_path = "/tmp/" + file_name
    s3_path = "output/" + file_name

    os.system('echo testing... >' + lambda_path)
    s3 = boto3.resource("s3")
    # upload under the "output/" prefix so the key matches the message below
    s3.meta.client.upload_file(lambda_path, bucket_name, s3_path)
    return {
        'statusCode': 200,
        'body': json.dumps('file is created in:' + s3_path)
    }
from os import path
import json, boto3, sys, uuid
import requests

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    bucket_name = "mybucket"
    url = "https://i.imgur.com/ExdKOOz.png"
    response = requests.get(url)
    filename = get_filename(url)
    img = response.content

    s3 = boto3.resource("s3")
    s3.Bucket(bucket_name).put_object(Key=filename, Body=img)
    return {'statusCode': 200, 'body': json.dumps('file is created in:' + filename)}
def get_filename(url):
    # strip the fragment, query string, and scheme, then take the
    # basename of the remaining path
    fragment_removed = url.split("#")[0]
    query_string_removed = fragment_removed.split("?")[0]
    scheme_removed = query_string_removed.split("://")[-1].split(":")[-1]
    if scheme_removed.find("/") == -1:
        return ""
    return path.basename(scheme_removed)
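The URL-parsing helper needs no AWS access, so it can be exercised locally. A quick check (repeating the helper here so the snippet runs standalone):

```python
from os import path

def get_filename(url):
    # copy of the helper above: strip fragment, query string, and
    # scheme, then take the basename of the remaining path
    fragment_removed = url.split("#")[0]
    query_string_removed = fragment_removed.split("?")[0]
    scheme_removed = query_string_removed.split("://")[-1].split(":")[-1]
    if scheme_removed.find("/") == -1:
        return ""
    return path.basename(scheme_removed)

print(get_filename("https://i.imgur.com/ExdKOOz.png"))      # ExdKOOz.png
print(get_filename("https://example.com/a/b.png?w=1#top"))  # b.png
print(get_filename("https://example.com"))                  # "" (no path)
```

Note the empty-string fallback: a URL with no path after the host would otherwise produce an invalid S3 key.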