I want to download a file from an HTTP URL directly into an Amazon S3 bucket, instead of onto the local system. I run Python on 64-bit Windows.
I tried passing the Amazon S3 bucket URL as the second argument of Python's urlretrieve function:
urllib.request.urlretrieve(url, <amazon s3 bucket url>)
I expected it to upload the file directly to S3, but it fails with a FileNotFoundError, which, after some thought, makes sense.
It appears that you want to run a command on a Windows computer (either local or running on Amazon EC2) that will copy the contents of a page identified by a URL directly onto Amazon S3.
This is not possible. There is no Amazon S3 API call that makes S3 retrieve content from an external location.
You will need to download the file from the Internet and then upload it to Amazon S3. The code would look something like:
import boto3
import urllib.request

# Download the file to a local path first
# (a relative path is used here since /tmp does not exist on Windows)
urllib.request.urlretrieve('http://example.com/hello.txt', 'hello.txt')

# ...then upload the local copy to S3
s3 = boto3.client('s3')
s3.upload_file('hello.txt', 'mybucket', 'hello.txt')