
grunt/npm plugin grunt-s3 + knox not uploading to S3

I'm trying to integrate an S3 deployment step into my Grunt toolchain to upload the newly built file out to AWS. However, the step always fails silently (it claims to succeed but doesn't actually do anything), and while debugging I've found a few different points along the way where things get hung up. I'm using grunt-s3 as the package that handles the grunt commands; it in turn calls the knox package, which wraps Amazon's S3 API.

Here's where things are falling apart:

1) There's a point in the logic where knox uses the fs module to get the size of the file it's about to upload via fs.stat(file, callback). As near as I can tell, the process dies somewhere inside the node.js layer between the fs.stat() call and the callback being invoked. I have set breakpoints and 'debugger' statements all over the callback logic, and neither node-inspector nor the IntelliJ debugger can catch the process after fs.stat() is called.

2) If I hack the knox plugin and change the fs.stat() call to fs.statSync(), the process successfully moves forward. Later on, I can see knox set up the expected PUT URL for S3 and then call stream.pipe() to upload the file. Nothing seems to happen as a result of the stream.pipe() call, and I can't see any activity in Wireshark indicating an upload between my machine and AWS. If I use the command-line tool s3cmd instead, the file uploads fine. A standalone sketch of the two stat calls is below.
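
To illustrate the two calls, here's a standalone sketch (the dist/app.js path is a placeholder, not my actual build output):

var fs = require('fs');

var file = 'dist/app.js'; // placeholder path, not my actual build output

// Async form, as knox uses it -- under grunt, this callback never seems to fire:
fs.stat(file, function (err, stat) {
  if (err) throw err;
  console.log('async stat, size:', stat.size);
});

// Synchronous form from my hack -- this returns and lets knox move forward:
var stat = fs.statSync(file);
console.log('sync stat, size:', stat.size);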

I'm about ready to ditch grunt for this step and move to directly invoking s3cmd, but I'd love to do it the grunt way if possible. Anyone have any suggestions as to what might be happening during these two steps?

Thanks!

Are you sitting behind a proxy? If so, knox will not work. If not, what does your s3 config look like?

Another important thing to check is the location of your bucket. Manually setting the region (in my example, "eu-west-1") helped for me, because knox defaults the region to "us-standard". Check your bucket properties to see where yours is located, look up the corresponding value in AWS's list of region endpoints, and set it manually!

Here is a config that works for me:

s3: {
  options: {
    key: "my-key",
    secret: "my-secret",
    access: "public-read",
    bucket: "my-bucket",
    region: "eu-west-1"
  },
  mysubtask: {
    upload: [
      {
        src: "src/*.js",
        dest: "/dist/",
        gzip: true,
        headers: {
          'Content-Type': 'application/javascript; charset=utf-8'
        }
      }
    ]
  }
}
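
For completeness, here's a sketch of how that block slots into a full Gruntfile (the "deploy" alias is just an example name, not something grunt-s3 defines):

// Gruntfile.js -- wiring the config above into grunt-s3 (sketch)
module.exports = function (grunt) {
  grunt.initConfig({
    s3: {
      options: {
        key: "my-key",
        secret: "my-secret",
        access: "public-read",
        bucket: "my-bucket",
        region: "eu-west-1"
      },
      mysubtask: {
        upload: [
          { src: "src/*.js", dest: "/dist/", gzip: true }
        ]
      }
    }
  });

  grunt.loadNpmTasks('grunt-s3');        // registers the "s3" multi-task
  grunt.registerTask('deploy', ['s3']);  // example alias; run with "grunt deploy" or "grunt s3:mysubtask"
};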

I would recommend using the AWS CLI for this. All you have to do is configure the AWS CLI and grunt-shell.

After setting those up, you can use shell commands to sync the files. Check the code snippet below:

  /**
   * Commands for copying assets from local to s3 bucket.
   * here we are using `aws s3 sync` command instead of `aws s3 cp`
   */
  let commands = [
    'echo "####### sync started #######"',
    'aws s3 sync ./www s3://bucketName/path --acl public-read',
    'echo "####### sync completed #######"'
  ];

  /**
   * Invalidate cache only when env is production
   */
  if(env === "production") {
    commands.push('echo "####### cache invalidation started #######"');
    commands.push('aws cloudfront create-invalidation --distribution-id {distribution_id} --paths "/*"');
  }

  grunt.config.set("shell", {
    s3sync: {
      command: commands.join('&&')
    }
  });

  grunt.loadNpmTasks('grunt-shell');
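
With that in place you can run the sync on its own, or chain it after a build step (the "build" and "deploy" task names below are assumptions for illustration):

// Run directly:  grunt shell:s3sync
// Or chain it into a deploy alias (assumed task names):
grunt.registerTask('deploy', ['build', 'shell:s3sync']);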
