
Is there a way to get location (public url) of S3 object using AWS CLI?

<premise>
I'm new to cloud computing in general, AWS specifically, and REST APIs, and am trying to cobble together a "big-picture" understanding.

I am working with LocalStack - which, by my understanding, simulates real AWS by responding identically to (a subset of) the AWS API when you point your client at the endpoint address/port that LocalStack listens on.

Lastly, I've been working from this tutorial: https://dev.to/goodidea/how-to-fake-aws-locally-with-localstack-27me
</premise>

Using the noted tutorial, and per its guidance, I successfully created an S3 bucket using the AWS CLI.
To demonstrate uploading a local file to the S3 bucket, though, the tutorial switches to Node.js, which I think demonstrates the AWS Node.js SDK:

// aws.js
// This code segment comes from https://dev.to/goodidea/how-to-fake-aws-locally-with-localstack-27me

const AWS = require('aws-sdk')
require('dotenv').config()

const credentials = {
   accessKeyId: process.env.AWS_ACCESS_KEY_ID,
   secretAccessKey: process.env.AWS_SECRET_KEY,
}

const useLocal = process.env.NODE_ENV !== 'production'

const bucketName = process.env.AWS_BUCKET_NAME

const s3client = new AWS.S3({
   credentials,
   /**
    * When working locally, we'll use the Localstack endpoints. This is the one for S3.
    * A full list of endpoints for each service can be found in the Localstack docs.
    */
   endpoint: useLocal ? 'http://localhost:4572' : undefined,
   /**
     * Including this option gets localstack to more closely match the defaults for
     * live S3. If you omit this, you will need to add the bucketName to the `Key`
     * property in the upload function below.
     *
     * see: https://github.com/localstack/localstack/issues/1180
     */
   s3ForcePathStyle: true,
})


const uploadFile = async (data, fileName) =>
   new Promise((resolve, reject) => {
      s3client.upload(
         {
            Bucket: bucketName,
            Key: fileName,
            Body: data,
         },
         (err, response) => {
            if (err) return reject(err) // reject so the caller's .catch() can handle upload errors
            resolve(response)
         },
      )
   })

module.exports = uploadFile


// test-upload.js
// This code segment comes from https://dev.to/goodidea/how-to-fake-aws-locally-with-localstack-27me

const fs = require('fs')
const path = require('path')
const uploadFile = require('./aws')

const testUpload = () => {
   const filePath = path.resolve(__dirname, 'test-image.jpg')
   const fileStream = fs.createReadStream(filePath)
   const now = new Date()
   const fileName = `test-image-${now.toISOString()}.jpg`
   uploadFile(fileStream, fileName).then((response) => {
      console.log(":)")
      console.log(response)
   }).catch((err) => {
      console.log(":|")
      console.log(err)
   })
}

testUpload()

Invocation:

$ node test-upload.js
:)
{ ETag: '"c6b9e5b1863cd01d3962c9385a9281d"',
  Location: 'http://demo-bucket.localhost:4572/demo-bucket/test-image-2019-03-11T21%3A22%3A43.511Z.jpg',
  key: 'demo-bucket/test-image-2019-03-11T21:22:43.511Z.jpg',
  Key: 'demo-bucket/test-image-2019-03-11T21:22:43.511Z.jpg',
  Bucket: 'demo-bucket' }

I do not have prior experience with Node.js, but my understanding of the above code is that it uses the AWS Node.js SDK's AWS.S3.upload() method to copy a local file to an S3 bucket, and prints the HTTP response (is that correct?).

Question: I observe that the HTTP response includes a "Location" key whose value looks like a URL I can copy/paste into a browser to view the image directly from the S3 bucket; is there a way to get this location using the AWS CLI?

Am I correct to assume that AWS CLI commands are analogues of the AWS SDK?

I tried uploading a file to my S3 bucket using the aws s3 cp CLI command, which I thought would be analogous to the AWS.S3.upload() method above, but its output doesn't include anything like a Location, and I'm not sure what I should have done - or should do - to get one the way the response to the AWS.S3.upload() SDK call did.

$ aws --endpoint-url=http://localhost:4572 s3 cp ./myFile.json s3://myBucket/myFile.json
upload: ./myFile.json to s3://myBucket/myFile.json
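
For comparison, the closest CLI analogue of AWS.S3.upload() that I've found so far is the lower-level s3api put-object subcommand (my own experiment, not from the tutorial); as far as I can tell it returns only an ETag, with no Location:

$ aws --endpoint-url=http://localhost:4572 s3api put-object --bucket myBucket --key myFile.json --body ./myFile.json
{
    "ETag": "\"...\""
}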

Update: continued study makes me now wonder whether it is implicit that a file uploaded to an S3 bucket by any means - whether by the CLI command aws s3 cp, the Node.js SDK method AWS.S3.upload(), etc. - can be accessed at http://<bucket_name>.<endpoint_without_http_prefix>/<bucket_name>/<key>, e.g. http://myBucket.localhost:4572/myBucket/myFile.json?
If this is implicit, I suppose you could argue it's unnecessary to ever be given the "Location" as in that example Node.js HTTP response.
Grateful for guidance - I hope it's obvious how painfully under-educated I am on all the involved technologies.


Update 2: It looks like the correct URL is <endpoint>/<bucket_name>/<key>, e.g. http://localhost:4572/myBucket/myFile.json.
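
As a quick sanity check (my own experiment, assuming LocalStack's default of not enforcing object permissions), that path-style URL can be assembled and fetched from the shell:

$ ENDPOINT=http://localhost:4572
$ BUCKET=myBucket
$ KEY=myFile.json
$ curl -s -o /dev/null -w "%{http_code}\n" "${ENDPOINT}/${BUCKET}/${KEY}"   # should print 200 if the object is readable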

The AWS CLI and the different SDKs offer similar functionality, but some add extra features and some format the data differently. It's safe to assume that you can do with the SDK whatever the CLI does, and vice versa. You might just have to work for it a little bit sometimes.

As you said in your update, not every file that is uploaded to S3 is publicly available. Buckets have policies and files have permissions. Files are only publicly available if the policies and permissions allow it.

If the file is public, you can just construct the URL as you described. If you have the bucket set up for website hosting, you can also use the domain you set up.

But if the file is not public, or you just want a temporary URL, you can use aws s3 presign s3://myBucket/myFile.json. This will give you a URL that anyone can use to download the file with the permissions of whoever executed the command. The URL will be valid for one hour unless you choose a different duration with --expires-in. The SDK has similar functionality as well, but you have to work a tiny bit harder to use it.
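
For example, with the v2 JavaScript SDK the question already uses, a presigned GET URL can be generated along these lines (a rough sketch; the bucket name, key, and dummy credentials are placeholders):

// presign-sketch.js - a rough sketch of the SDK-side equivalent of `aws s3 presign`
const AWS = require('aws-sdk')

const s3client = new AWS.S3({
   // dummy credentials, region, and the LocalStack S3 endpoint, as in the question's aws.js
   accessKeyId: 'test',
   secretAccessKey: 'test',
   region: 'us-east-1',
   endpoint: 'http://localhost:4572',
   s3ForcePathStyle: true,
})

// getSignedUrl() builds and signs the URL locally; no request is sent to S3 here
const url = s3client.getSignedUrl('getObject', {
   Bucket: 'myBucket',   // placeholder bucket name
   Key: 'myFile.json',   // placeholder object key
   Expires: 3600,        // seconds; analogous to the CLI's --expires-in
})
console.log(url)

The CLI form against LocalStack would be something like:

$ aws --endpoint-url=http://localhost:4572 s3 presign s3://myBucket/myFile.json --expires-in 3600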

Note: Starting with LocalStack version 0.11.0, all APIs are exposed via a single edge service, which is accessible on http://localhost:4566 by default.

Assuming you've added some files to your bucket:

aws --endpoint-url http://localhost:4566 s3api list-objects-v2 --bucket mybucket
{
    "Contents": [
        {
            "Key": "blog-logo.png",
            "LastModified": "2020-12-28T12:47:04.000Z",
            "ETag": "\"136f0e6acf81d2d836043930827d1cc0\"",
            "Size": 37774,
            "StorageClass": "STANDARD"
        }
    ]
}

you should be able to access your file with

http://localhost:4566/mybucket/blog-logo.png
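
For example, fetching it with curl (assuming LocalStack's default of not enforcing object ACLs):

$ curl -s -o blog-logo.png http://localhost:4566/mybucket/blog-logo.png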
