I have an S3 bucket my-bucket
and a Python script in which I want to create a folder ( training_data
) inside the bucket, containing a text file. I was told to use s3fs
, but I can't get it to work so far, and I find the documentation rather unintuitive.
What I am trying is the following:
import os
import s3fs

s3 = s3fs.S3FileSystem(anon=False)
path = 's3://my-bucket/training_data/'
if not os.path.exists(path):
    os.makedirs(path)
Unfortunately this doesn't work; it just creates the folder locally. I have already configured my AWS credentials, by the way. Can anyone help?
S3 is object storage: it was designed as a key-value store, where the key is the full name of the file and the value is the file's content (the object).
However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. Amazon S3 does this by using a shared name prefix for objects, that is, objects that belong to the same "folder" have key names that begin with a common string, followed by a delimiter (/
by default). Object names are also referred to as key names.
I would recommend using the boto3
package:
import boto3

s3_client = boto3.client(service_name='s3',
                         aws_access_key_id=access_key,
                         aws_secret_access_key=secret_key)

with open(source_file_path, 'rb') as f:
    s3_client.put_object(Bucket=bucket, Body=f, Key=s3_prefix)
where:
source_file_path
- the path of the local file you would like to upload
s3_prefix
- the desired key name in S3 (in your case something like training_data/my_file.txt)
You can just write the file to S3. It will handle creating the folder for you.
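If you also want the folder to show up in the console before any real file exists under it, you can put a zero-byte object whose key ends with a slash. A minimal sketch, assuming a boto3 S3 client is passed in; the create_folder_and_file helper and its names are illustrative, not part of boto3:

```python
def create_folder_and_file(s3_client, bucket, prefix, filename, body):
    """Create a zero-byte 'folder' marker and upload a file under it.

    The trailing-slash key makes the prefix visible as an (empty)
    folder in the S3 console; the second put_object stores the file.
    """
    folder_key = prefix.rstrip('/') + '/'
    file_key = folder_key + filename
    s3_client.put_object(Bucket=bucket, Key=folder_key, Body=b'')
    s3_client.put_object(Bucket=bucket, Key=file_key, Body=body)
    return folder_key, file_key

# usage (assumes AWS credentials are configured):
# import boto3
# client = boto3.client('s3')
# create_folder_and_file(client, 'my-bucket', 'training_data',
#                        'notes.txt', b'hello')
```

The marker object is optional: uploading any object under the prefix has the same visual effect in the console.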
Python's os
library operates on the local file system only. Use the boto3 library and its put_object
API instead.
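For completeness, the same result should also be achievable with s3fs itself, since writing an object whose key starts with training_data/ is what makes the folder appear; no mkdir call is needed. A minimal sketch; the upload_text helper is illustrative, and the filesystem object is only assumed to provide an s3fs-style open method:

```python
def upload_text(fs, bucket, prefix, filename, text):
    """Write text to <bucket>/<prefix>/<filename>.

    S3 has no real directories: putting an object under the prefix
    implicitly 'creates' the folder shown in the console.
    """
    key = f"{bucket}/{prefix.strip('/')}/{filename}"
    with fs.open(key, 'w') as f:
        f.write(text)
    return key

# usage (assumes AWS credentials are configured,
# e.g. in ~/.aws/credentials):
# import s3fs
# fs = s3fs.S3FileSystem(anon=False)
# upload_text(fs, 'my-bucket', 'training_data', 'example.txt', 'hello')
```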