Convert Pandas Dataframe to Image and Upload Directly to S3 Bucket Without Saving Locally Using Python

Upload a dataframe as a zipped CSV directly to S3 without saving it on the local machine
How can I upload a dataframe as a zipped CSV to an S3 bucket without first saving it on the local machine?

I have already connected to the bucket using:
self.s3_output = S3(bucket_name='test-bucket', bucket_subfolder='')
We can make a file-like object with BytesIO and zipfile from the standard library.
# Python 3.7
from io import BytesIO
import zipfile

# .to_csv returns a string when called with no path argument
s = df.to_csv()

# Keep a reference to the buffer so it can be uploaded
# after the archive has been closed
buffer = BytesIO()
with zipfile.ZipFile(buffer, mode="w") as z:
    z.writestr("df.csv", s)
buffer.seek(0)  # rewind before uploading
# upload file here
You will need to refer to upload_fileobj to customize how the upload behaves.
yourclass.s3_output.upload_fileobj(buffer, ...)
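Putting the pieces together, here is a minimal self-contained sketch of the in-memory zip approach. The helper name `df_to_zip_buffer` and the sample DataFrame are illustrative, not from the answer; the commented-out `upload_fileobj` call assumes a plain boto3 client rather than the asker's `S3` wrapper class.

```python
from io import BytesIO
import zipfile

import pandas as pd


def df_to_zip_buffer(df: pd.DataFrame, csv_name: str) -> BytesIO:
    """Serialize a DataFrame to CSV inside an in-memory zip archive.

    Returns the buffer rewound to position 0, ready for upload_fileobj.
    """
    buffer = BytesIO()
    with zipfile.ZipFile(buffer, mode="w",
                         compression=zipfile.ZIP_DEFLATED) as z:
        z.writestr(csv_name, df.to_csv(index=False))
    buffer.seek(0)
    return buffer


df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
buffer = df_to_zip_buffer(df, "df.csv")

# The rewound buffer can then be handed to boto3, e.g.:
# s3_client.upload_fileobj(buffer, "your_bucket", "path/df.zip")

# Local round-trip check: read the CSV back out of the archive
with zipfile.ZipFile(buffer) as z:
    restored = pd.read_csv(z.open("df.csv"))
print(restored.equals(df))  # True
```

Seeking back to 0 before the upload matters: `upload_fileobj` reads from the current position, and after writing the archive the buffer's position is at the end.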
The same approach works for both zip and gz:
import boto3
import gzip
import pandas as pd
from io import BytesIO, TextIOWrapper

s3_client = boto3.client(
    service_name="s3",
    endpoint_url=your_endpoint_url,
    aws_access_key_id=your_access_key,
    aws_secret_access_key=your_secret_key
)

# Your file name inside the gzip archive
your_filename = "test.csv"
s3_path = "path/to/your/s3/compressed/file/test.csv.gz"
bucket = "your_bucket"
df = your_df

gz_buffer = BytesIO()
with gzip.GzipFile(
        filename=your_filename,
        mode='w',
        fileobj=gz_buffer) as gz_file:
    df.to_csv(TextIOWrapper(gz_file, 'utf8'), index=False)

s3_client.put_object(
    Bucket=bucket, Key=s3_path, Body=gz_buffer.getvalue()
)
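The gzip variant can be sanity-checked locally before any upload. A minimal sketch, with a made-up DataFrame and file name; the explicit `flush()` is a robustness tweak so the text wrapper's buffered output reaches the gzip stream before it closes:

```python
import gzip
from io import BytesIO, TextIOWrapper

import pandas as pd

df = pd.DataFrame({"id": [1, 2, 3], "value": [10.0, 20.0, 30.0]})

# Compress the CSV into an in-memory gzip stream, as in the answer above
gz_buffer = BytesIO()
with gzip.GzipFile(filename="test.csv", mode="w",
                   fileobj=gz_buffer) as gz_file:
    writer = TextIOWrapper(gz_file, "utf8")
    df.to_csv(writer, index=False)
    writer.flush()  # push buffered text into the gzip stream

# gz_buffer.getvalue() is exactly the bytes payload put_object would send;
# decompress it locally to confirm the CSV survives the round trip
restored = pd.read_csv(BytesIO(gzip.decompress(gz_buffer.getvalue())))
print(restored.equals(df))  # True
```

A nice property of this layout is that the bytes given to `put_object` are a complete, standalone `.gz` file, so the object can later be read straight back with `pd.read_csv(..., compression='gzip')`.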