I'm trying to save a CSV file directly into an S3 bucket. However, since I didn't find a way to do that directly (without using a pandas dataframe), I save it to my local machine, upload it to S3, then remove it from the local machine. Here is what my code looks like:
data= [[None, u'https://aaa.com/p/cat.do?tags=|5555|4444|8888&zzz;A=3', u'00.00.00.000, 00.00.00.000', u'25359725', u'2018-03-06 18:01:18', u'DC01F54GH8D.aa201', None, 1814498434, 765651, u'2018-03-12 18:01:18', 168, 0, u'2018-03-12 18:32:08.428032'], [None, u'https://aaa.com/p/cat.do?tags=|5555|4444|8888&zzz;A=', u'00.00.00.000, 00.00.00.000', u'10707456', u'2018-03-06 18:01:02', u'76FD86AA.abd', None, 1814498440, 760960, u'2018-03-12 18:01:02', 168, 0, u'2018-03-12 18:32:08.805207']]
import csv
import os
from datetime import datetime

import boto3

s3_resource = boto3.resource('s3')
filename = "#" + str(datetime.utcnow()).replace(" ", "_").replace(":", "") + ".csv"
with open(filename, "wb") as f:
    writer = csv.writer(f)
    writer.writerows(data)
s3_resource.Object('my_bucket', filename).put(Body=open(filename, 'rb'))
os.remove(filename)
The data list mentioned above is just a small sample of the real data. Unfortunately, this raised the following error:
writer.writerows(data)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 152-153: ordinal not in range(128)
I tried to change:
with open(filename, "wb") as f:
to
with open(filename, "wb", errors='ignore') as f:
But I got this:
'errors' is an invalid keyword argument for this function
Can I get some help with this encoding problem, please? And if anyone knows a better way to save a CSV file to S3, please share your solution.
Many thanks.
I'm using Python 2.7.
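The error comes from Python 2's csv module, which only writes byte strings: when it receives unicode values it implicitly encodes them with the ascii codec, which fails on non-ASCII characters like the ones in your URLs. A common workaround is to encode each unicode cell to UTF-8 yourself before writing. This is only a sketch; `encode_row` is a hypothetical helper, not part of your code:

```python
import csv


def encode_row(row):
    # Encode text cells to UTF-8 bytes so Python 2's csv module never
    # falls back to the ascii codec; pass None/ints through unchanged.
    text_type = type(u'')  # unicode on Python 2, str on Python 3
    return [cell.encode('utf-8') if isinstance(cell, text_type) else cell
            for cell in row]


# Usage with your original code (Python 2):
# with open(filename, "wb") as f:
#     writer = csv.writer(f)
#     writer.writerows(encode_row(r) for r in data)
```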
If you wish to upload a file to S3 using boto3, an easier method would be:
import boto3
client = boto3.client('s3')
client.upload_file('/tmp/hello.txt', 'mybucket', 'hello.txt')