
Upload a dataframe as a zipped CSV directly to S3 without saving it on the local machine

How can I upload a data frame as a zipped CSV into an S3 bucket without saving it on my local machine first?

I have the connection to that bucket already running using:

self.s3_output = S3(bucket_name='test-bucket', bucket_subfolder='')

We can make a file-like object with BytesIO and zipfile from the standard library.

# Python 3.7
from io import BytesIO
import zipfile

# .to_csv returns a string when called without a path argument
s = df.to_csv()

buffer = BytesIO()
with zipfile.ZipFile(buffer, mode="w") as z:
    z.writestr("df.csv", s)

# the archive is finalized when the with block exits;
# rewind the buffer before uploading
buffer.seek(0)

You'll want to refer to upload_fileobj in order to customize how the upload behaves.

yourclass.s3_output.upload_fileobj(buffer, ...)
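For reference, here is a minimal sketch of the same flow using a plain boto3 client instead of the question's `S3` wrapper class. The bucket name comes from the question; the object key and `ExtraArgs` are illustrative assumptions:

import boto3
import zipfile
from io import BytesIO

s3 = boto3.client("s3")

buffer = BytesIO()
with zipfile.ZipFile(buffer, mode="w") as z:
    z.writestr("df.csv", df.to_csv())

# rewind so upload_fileobj reads from the start of the archive
buffer.seek(0)
s3.upload_fileobj(
    buffer,
    Bucket="test-bucket",       # from the question
    Key="df.zip",               # hypothetical key
    ExtraArgs={"ContentType": "application/zip"},
)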

The same approach works for both zip and gzip; here is a gzip variant:

import boto3
import gzip
import pandas as pd
from io import BytesIO, TextIOWrapper

s3_client = boto3.client(
    service_name="s3",
    endpoint_url=your_endpoint_url,
    aws_access_key_id=your_access_key,
    aws_secret_access_key=your_secret_key,
)

# your file name inside the archive
your_filename = "test.csv"

s3_path = "path/to/your/s3/compressed/file/test.csv.gz"
bucket = "your_bucket"
df = your_df

gz_buffer = BytesIO()

with gzip.GzipFile(
    filename=your_filename,
    mode="w",
    fileobj=gz_buffer,
) as gz_file:
    df.to_csv(TextIOWrapper(gz_file, "utf-8"), index=False)

# upload only after the GzipFile is closed, so the
# compressed stream is fully flushed into gz_buffer
s3_client.put_object(
    Bucket=bucket, Key=s3_path, Body=gz_buffer.getvalue()
)
