I am using an AWS EC2 instance. On this instance I'm generating some files. These operations are performed by user data.
Now I want to store those files on S3 by writing code in the user data itself.
Using the most recent AWS CLI ( http://aws.amazon.com/cli/ ) you can use the following command to copy files from your EC2 instance, or even your local machine, to S3 storage:
aws s3 cp myfolder s3://mybucket/myfolder --recursive
You'll then get something like:
upload: myfolder/file1.txt to s3://mybucket/myfolder/file1.txt
upload: myfolder/subfolder/file1.txt to s3://mybucket/myfolder/subfolder/file1.txt
If this is your first use of the aws CLI tool, you'll first need to run:
aws configure
This will ask you to enter your access key and secret access key, along with a default region.
There are a number of ways to send files to S3. I've listed them below along with installation and documentation where relevant.
S3CMD : ( http://s3tools.org/s3cmd ) You can install this on Debian/Ubuntu easily via apt-get install s3cmd, then run it from the command line. You could incorporate it into a bash script or your program.
S3FS : ( http://www.pophams.com/blog/howto-setups3fsonubuntu1104x64 and https://code.google.com/p/s3fs/wiki/InstallationNotes ) This mounts an S3 bucket so that it looks just like a local disk. It takes a little more effort to set up, but once the disk is mounted you don't need to do anything special to get files into your bucket.
If you use a CMS (let's use Drupal as an example) you may have the option of using a module to handle access to your bucket, e.g. http://drupal.org/project/storage_api
Finally, you can use a programming-language implementation to handle all the logic yourself. For PHP you can start with this http://undesigned.org.za/2007/10/22/amazon-s3-php-class and see the documentation here http://undesigned.org.za/2007/10/22/amazon-s3-php-class/documentation
An example of the PHP implementation:
<?php
// Simple PUT:
if (S3::putObject(S3::inputFile($file), $bucket, $uri, S3::ACL_PRIVATE)) {
    echo "File uploaded.";
} else {
    echo "Failed to upload file.";
}
?>
An example of s3cmd:
s3cmd put my.file s3://bucket-url/my.file
Another option worth mentioning is the AWS CLI ( http://aws.amazon.com/cli/ ). It is widely available: it's already included on Amazon Linux, and on other systems it can be installed via pip, Python's package manager (Python is available on many systems, including Linux and Windows).
http://docs.aws.amazon.com/cli/latest/reference/s3/index.html
Available commands: cp, ls, mb, mv, rb, rm, sync, website
http://docs.aws.amazon.com/cli/latest/reference/s3api/index.html for lower-level interaction with S3
Install the s3cmd package with:
yum install s3cmd
or
sudo apt-get install s3cmd
depending on your OS. Then copy data with:
s3cmd get s3://tecadmin/file.txt
s3cmd ls can also list the files.
For more details, see this.
I'm using s3cmd to store nightly exported database backup files from my ec2 instance. After configuration of s3cmd, which you can read about at their site, you can then run a command like:
s3cmd put ./myfile s3://mybucket
Use s3cmd for that:
s3cmd get s3://AWS_S3_Bucket/dir/file
See how to install s3cmd here:
This works for me...
All attempts to mount S3 as a pseudo-filesystem are problematic. It's an object store, not a block device. If you must mount it because you have legacy code that requires local file paths, try goofys. It's about 50x faster than s3fs. https://github.com/kahing/goofys
s3cmd is a bit long in the tooth these days. The AWS CLI is a better option. Its syntax is a bit less convenient, but it's one less tool you need to keep around.
If you can, stick to HTTP access; it'll make your life easier in the long run.
With the AWS CLI, I used the following command to copy a zip file from an EC2 instance to S3:
aws s3 cp file-name.zip s3://bucket-name/
I think the best answer in general is in fact above: use the aws command. But for cases where you don't want to bother installing anything else, it's also worth mentioning that you can just download a file over HTTPS, e.g. open a browser and navigate to:
https://s3.amazonaws.com/(bucketName)/(relativePath)/(fileName)
That also means you could just use wget or curl to do the transfer from a shell prompt.
I'm assuming you need to copy from a new instance to S3. First, create an IAM role so you don't need to run aws configure; this should all work at launch time. Second, install the CLI and then define your copy job using the AWS CLI in user data. Example below for Ubuntu 18. Assign the IAM role to your instance.
Userdata:
#!/bin/bash
apt-get update -y
apt install awscli -y
aws s3 cp *Path of data* s3://*destination bucket* --recursive *--other options*
To create an IAM role:
1. Go to the IAM console at https://console.aws.amazon.com/iam/
2. In the left pane, select Roles, then click Create role.
3. For Select type of trusted entity, choose AWS service. Select EC2. Select Next: Permissions.
4. For Attach permissions policies, choose the AWS managed policies that contain the required permissions, or create a custom policy.
5. Click Choose a service, type S3 in the Find a service box, click S3, and select actions (all, or read + write plus any others you may need).
6. Click Resources and select the resources (you can allow all resources or limit to a specific bucket with its ARN).
7. Click Next: Review policy. Enter a name and description. Click Create policy.
8. Return to the Create role page, click refresh, filter policies by the name you assigned, and select the policy.
9. Click Next: Tags and add any required tags.
10. On the Review page, enter a name and description for the role and click Create role.