
Saving a CSV file to S3 using boto3

I am trying to write and save a CSV file to a specific, existing folder in S3. This is my code:

from io import BytesIO
import pandas as pd
import boto3
s3 = boto3.resource('s3')

d = {'col1': [1, 2], 'col2': [3, 4]}
df = pd.DataFrame(data=d)

csv_buffer = BytesIO()

bucket = 'bucketName/folder/'
filename = "test3.csv"
df.to_csv(csv_buffer)
content = csv_buffer.getvalue()

def to_s3(bucket,filename,content):
  s3.Object(bucket,filename).put(Body=content)

to_s3(bucket,filename,content)

This is the error that I get:

Invalid bucket name "bucketName/folder/": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$"

I also tried:

bucket = bucketName/folder

and:

bucket = bucketName
key = folder/
s3.Object(bucket,key,filename).put(Body=content)

Any suggestions?

Saving into S3 buckets can also be done with upload_file, using an existing .csv file:

import boto3
s3 = boto3.resource('s3')

bucket = 'bucket_name'
filename = 'file_name.csv'
s3.meta.client.upload_file(Filename=filename, Bucket=bucket, Key=filename)
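
If you take this route, the DataFrame from the question can first be written to a local CSV file and then uploaded. A minimal sketch, assuming the bucket already exists; the bucket, folder, and file names below are placeholders:

import boto3
import pandas as pd

s3 = boto3.resource('s3')

df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})

bucket = 'bucket_name'        # bucket name only, no folder path
filename = 'file_name.csv'    # local file name, reused as part of the key

# Write the DataFrame to a local CSV file, then upload it under a "folder" prefix.
df.to_csv(filename, index=False)
s3.meta.client.upload_file(Filename=filename, Bucket=bucket, Key='folder/' + filename)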

This should work:

def to_s3(bucket, filename, content):
    client = boto3.client('s3')
    # The folder path belongs in the object key, not in the bucket name.
    k = "folder/subfolder/" + filename
    client.put_object(Bucket=bucket, Key=k, Body=content)
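
For completeness, calling this helper with the in-memory buffer from the question might look like the sketch below; the bucket name and the folder/subfolder prefix built inside to_s3 are placeholders you would replace with your own:

from io import StringIO
import pandas as pd

df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})

csv_buffer = StringIO()
df.to_csv(csv_buffer)

# The bucket argument is the bare bucket name; the folder prefix is added inside to_s3.
to_s3('bucketName', 'test3.csv', csv_buffer.getvalue())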

This should work:

from io import StringIO
import boto3

s3 = boto3.client('s3')

# bucketName, folder, filename and df are assumed to be defined as in the question
bucket = bucketName
key = f"{folder}/{filename}"
csv_buffer = StringIO()
df.to_csv(csv_buffer)
content = csv_buffer.getvalue()
s3.put_object(Bucket=bucket, Body=content, Key=key)

AWS bucket names are not allowed to contain slashes ("/"); the slashes should be part of the Key instead. AWS uses slashes to show "virtual" folders in the dashboard. Since CSV is a text file, I'm using StringIO instead of BytesIO.
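
Putting that together, one way to repair the question's original code is to keep the bucket name bare and move the folder into the object key; a minimal sketch along those lines (bucket and folder names are placeholders):

from io import StringIO
import boto3
import pandas as pd

s3 = boto3.resource('s3')

df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})

# CSV is text, so a StringIO buffer is used here instead of BytesIO.
csv_buffer = StringIO()
df.to_csv(csv_buffer)

bucket = 'bucketName'        # bucket name alone, no slashes
key = 'folder/test3.csv'     # the "folder" is just a prefix inside the key

s3.Object(bucket, key).put(Body=csv_buffer.getvalue())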
