I want to create a set of folders inside which I want to upload my files to an S3 bucket. However, I am not getting the required file names. This is my code:
import os
from datetime import datetime

import boto3
from botocore.exceptions import NoCredentialsError

def upload_to_aws(local_file, bucket, s3_file):
    s3 = boto3.client('s3', aws_access_key_id=ACCESS_KEY,
                      aws_secret_access_key=SECRET_KEY)
    try:
        s3.upload_file(local_file, bucket, s3_file)
        print("Upload Successful")
        return True
    except FileNotFoundError:
        print("The file was not found")
        return False
    except NoCredentialsError:
        print("Credentials not available")
        return False

customer_name = "demo"
date = datetime.now().strftime('%d')
month = datetime.now().month
year = datetime.now().year

stack_path = "/home/ubuntu/pano/stack"
for file in sorted(os.listdir(stack_path)):
    print(f"Uploading {file}")
    folders_path = f"{customer_name}/{year}/{month}/{date}"
    uploaded = upload_to_aws(f'{stack_path}/{file}', 'bucket-name1', '%s/%s' % (folders_path, file))
I want the folders to be customer_name/year/month/date, inside which I want the files to be uploaded. However, the folders I am getting are "customer_name/", "year/", "month/", "date/". I do not want the trailing slash in the folder names. How do I do this?
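For reference, one way to assemble such a key safely is sketched below. The `build_key` helper is hypothetical (it is not part of the code above); it joins the parts with forward slashes, which is what S3 uses as its delimiter, and collapses any accidental doubled slashes such as the `year//month//` ones described later:

```python
import posixpath
import re

def build_key(customer_name, year, month, day, filename):
    """Assemble an S3 object key; S3 keys use forward slashes as delimiters."""
    key = posixpath.join(customer_name, str(year), str(month), str(day), filename)
    # Collapse any accidental repeated slashes (e.g. from empty components).
    return re.sub(r"/+", "/", key)

print(build_key("demo", 2021, 4, 22, "file1.txt"))  # demo/2021/4/22/file1.txt
```

The resulting string can be passed directly as the `s3_file` argument of `upload_to_aws`.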
Edit
The folders I want have the following path: year/month/date/filename.txt. However, the path that is being created has the following folder names: year//month//date//filename.txt
Each of the folder names has a slash attached to it. These are the directories I see in my bucket.
I want to avoid the slash attached to the names.
This is just a representation to show what is a "file" and what is a "folder".
But remember: in S3 there is no real concept of a folder; there is only the object key.
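That point can be illustrated without touching AWS at all. The small sketch below (a hypothetical `console_folders` helper, not part of any AWS API) mimics how the S3 console derives "folders" from flat object keys: a "folder" is just the distinct key segment up to the next `/` after a given prefix.

```python
def console_folders(keys, prefix=""):
    """Mimic how the S3 console shows 'folders': they are the distinct
    key segments up to the next '/' after the given prefix."""
    folders = set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if "/" in rest:
            folders.add(prefix + rest.split("/", 1)[0] + "/")
    return sorted(folders)

keys = [
    "demo/2021/4/22/file1",
    "demo/2021/4/22/file2",
    "demo/2021/4/22/file3",
]
print(console_folders(keys))           # ['demo/']
print(console_folders(keys, "demo/"))  # ['demo/2021/']
```

The trailing `/` in the output is only display convention; none of the stored keys ends with a slash.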
See below: I am using your code, and it succeeded in uploading the files to the bucket.
Python 3.8.0 (default, Feb 25 2021, 22:10:10)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> import boto3
>>> from datetime import datetime
>>>
>>> def upload_to_aws(local_file, bucket, s3_file):
... print("###", local_file, bucket, s3_file)
... s3 = boto3.client('s3')
...
... try:
... s3.upload_file(local_file, bucket, s3_file)
... print("Upload Successful")
... return True
... except FileNotFoundError:
... print("The file was not found")
... return False
... except NoCredentialsError:
... print("Credentials not available")
... return False
...
>>> customer_name = "demo"
>>> date = datetime.now().strftime('%d')
>>> month = datetime.now().month
>>> year = datetime.now().year
>>>
>>>
>>> stack_path = "/tmp/test"
>>> for file in sorted(os.listdir(stack_path)):
... print(f"Uploading {file}")
... folders_path = f"{customer_name}/{year}/{month}/{date}"
... uploaded = upload_to_aws(f'/tmp/test/{file}', 'test-bucket', '%s/%s' % (folders_path, file))
...
Uploading file1
### /tmp/test/file1 test-bucket demo/2021/4/22/file1
Upload Successful
Uploading file2
### /tmp/test/file2 test-bucket demo/2021/4/22/file2
Upload Successful
Uploading file3
### /tmp/test/file3 test-bucket demo/2021/4/22/file3
Upload Successful
>>> exit()
Now let's see how to get the file. Note that the key of the file is exactly what you expect, so there is no double / in it.
Python 3.8.0 (default, Feb 25 2021, 22:10:10)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto3
>>> s3 = boto3.client('s3')
>>> s3.get_object(Bucket='test-bucket', Key='demo/2021/4/22/file1')
{'ResponseMetadata': {'RequestId': 'RE...', 'HostId': 'eMmV3p+...', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-id-2': 'eMm...', 'x-amz-request-id': 'RE..', 'date': 'Thu, 22 Apr 2021 13:45:33 GMT', 'last-modified': 'Thu, 22 Apr 2021 13:41:09 GMT', 'etag': '"355..."', 'accept-ranges': 'bytes', 'content-type': 'binary/octet-stream', 'content-length': '11', 'server': 'AmazonS3'}, 'RetryAttempts': 0}, 'AcceptRanges': 'bytes', 'LastModified': datetime.datetime(2021, 4, 22, 13, 41, 9, tzinfo=tzutc()), 'ContentLength': 11, 'ETag': '"35..."', 'ContentType': 'binary/octet-stream', 'Metadata': {}, 'Body': <botocore.response.StreamingBody object at 0x7f2b2ea8ddf0>}
From the AWS CLI: in the first output you can see the / at the end, but again, that is just a representation; look at the full object keys below.
$ aws s3 ls s3://test-bucket/demo
PRE demo/
$ aws s3 ls s3://test-bucket/demo --recursive
2021-04-22 10:41:09 11 demo/2021/4/22/file1
2021-04-22 10:41:09 11 demo/2021/4/22/file2
2021-04-22 10:41:10 11 demo/2021/4/22/file3
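The grouping the CLI performs above can also be sketched in plain Python. The `s3_ls` helper below is hypothetical (the real CLI asks S3 to group keys server-side with a list delimiter), but it shows why `demo/` appears as a `PRE` line while `--recursive` prints full keys:

```python
def s3_ls(keys, prefix="", recursive=False):
    """Mimic `aws s3 ls`: without --recursive, keys that continue past the
    next '/' are collapsed into a single 'PRE <prefix>/' line."""
    lines = []
    seen = set()
    for key in sorted(keys):
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if recursive or "/" not in rest:
            lines.append(key)
        else:
            pre = prefix + rest.split("/", 1)[0] + "/"
            if pre not in seen:
                seen.add(pre)
                lines.append("PRE " + pre)
    return lines

keys = [
    "demo/2021/4/22/file1",
    "demo/2021/4/22/file2",
    "demo/2021/4/22/file3",
]
print(s3_ls(keys, "demo"))          # ['PRE demo/']
print(s3_ls(keys, recursive=True))  # the three full keys
```

Either way, the bucket itself holds only the three flat keys; the "folders" exist purely in how the listing is rendered.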