
How to copy a JSON file to Amazon S3 using Python

I am experimenting with writing a JSON file to AWS S3. Below is the sample code. 'fileNew.json' is the file I want to write, and 'fileOld.json' is an existing file in S3 that I included in the code by mistake and that shouldn't be there.

import os

# write the DataFrame to a local JSON file, then upload it with the AWS CLI
df.to_json('fileNew.json', orient='records', lines=True)

os.system('aws s3 cp fileNew.json s3://sbx-myproject/fileOld.json --sse')

Will the above command replace the existing file, or will it simply fail to create the new one?

If an object with the same key already exists, it is automatically overwritten. So yes, fileOld.json will be replaced by the file you are uploading.
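
If you would rather not overwrite fileOld.json by accident, you can check whether the key already exists before uploading. A minimal sketch with boto3 (the bucket and key names are taken from your question):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
try:
    # head_object raises a ClientError with code 404 if the key is absent
    s3.head_object(Bucket='sbx-myproject', Key='fileOld.json')
    print('fileOld.json already exists and would be overwritten')
except ClientError as e:
    if e.response['Error']['Code'] == '404':
        print('fileOld.json does not exist yet')
    else:
        raise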

While your code will work, it is recommended that you use the AWS SDK for Python (boto3) instead of shelling out to the AWS CLI.

import boto3

s3 = boto3.resource('s3')
# open the local file and upload its contents under the key 'fileOld.json'
with open('fileNew.json', 'rb') as data:
    s3.Bucket('sbx-myproject').put_object(Key='fileOld.json', Body=data)
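
Note that put_object as shown above does not reproduce the --sse flag from your CLI command. One way to keep server-side encryption (this sketch assumes SSE-S3, i.e. AES256, which is what --sse defaults to) is upload_file, which also opens and streams the local file for you:

import boto3

s3 = boto3.client('s3')
# ExtraArgs carries the server-side encryption setting that --sse requested
s3.upload_file(
    'fileNew.json',
    'sbx-myproject',
    'fileOld.json',
    ExtraArgs={'ServerSideEncryption': 'AES256'},
)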
