
How to cache an AWS S3 file on a node.js server and serve it on request?

For my website, I am deploying all the assets (fonts/images/js etc.) to an S3 bucket. The index.html (a single-page Ember.js application) is deployed on an Elastic Beanstalk node.js server. The node app.js takes any request to www.domain.com/* and serves the locally stored index.html. I would like to cut out the process of deploying a new application to Elastic Beanstalk for every production build and simply deploy all assets and the index.html to the S3 bucket.

This is what I have so far:

var AWS = require('aws-sdk'),
    fs = require('fs');
/*
 * AWS Security credentials
 */
AWS.config.loadFromPath('./config.json');

var port = process.env.PORT || 3000,
    http = require("http");

var nodeStatic = require('node-static');


/*
 * Create a node-static server instance
 * to serve files from the current directory
 * (where the cached index.html is written)
 */
var fileServer = new nodeStatic.Server();

/*
 * Fetch index.html from S3
 * and cache it locally on disk
 */
var s3 = new AWS.S3();
var params = {Bucket: 'assets', Key: 'index.html'};
var fileStream = fs.createWriteStream('index.html');

s3.getObject(params).
    on('httpData', function(chunk) { fileStream.write(chunk); }).
    on('httpDone', function() { fileStream.end(); }).
    send();

var server = http.createServer(function (request, response) {
    request.addListener('end', function () {
        // Serve the locally cached copy of index.html for every request
        fileServer.serveFile('index.html', 200, {}, request, response);
    }).resume();
}).listen(port);

This, I assume, will only get index.html from S3 when the server first fires up. What would be the best practice for caching it, preferably with a 1-minute expiry?
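To illustrate what I mean by an expiry, something along these lines is roughly what I have in mind. It is an untested sketch that keeps the body in memory instead of writing it to disk, reuses the s3 client and params defined above, and the names (getIndexHtml, CACHE_TTL_MS) are just made up for the example:

var CACHE_TTL_MS = 60 * 1000;   // re-fetch from S3 after one minute
var cachedIndex = null;         // Buffer holding the last copy of index.html
var fetchedAt = 0;              // time of the last successful fetch

function getIndexHtml(callback) {
    // Serve the in-memory copy while it is still fresh
    if (cachedIndex && (Date.now() - fetchedAt) < CACHE_TTL_MS) {
        return callback(null, cachedIndex);
    }
    // Otherwise pull a fresh copy from S3 and remember when it arrived
    s3.getObject(params, function (err, data) {
        if (err) {
            // Fall back to the stale copy rather than failing the request
            if (cachedIndex) return callback(null, cachedIndex);
            return callback(err);
        }
        cachedIndex = data.Body;
        fetchedAt = Date.now();
        callback(null, cachedIndex);
    });
}

The request handler would then call getIndexHtml() and write the returned buffer to the response instead of going through node-static.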

Thanks!

Have a look at Amazon's CloudFront. It sounds like a fit for what you're trying to accomplish, namely that the files wouldn't have to go through your server again. It does add a little to the round-trip time of a full page load.

That said, to cache locally, you could store the entire file in Redis (or some other quick key-value store like Riak, memcached, etc.):

  1. Run Redis on your server
  2. Store the file in Redis when it is pulled from S3
  3. Set an expiry time after saving it to Redis
  4. Check Redis for the key before re-pulling from S3
    • If it exists, use it and reset the timeout
    • If not, re-pull from S3 (storing it in Redis and setting the timeout)

I am unsure how this would respond if the files were large, but it would still be faster than pulling from S3 each time.
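Roughly, those steps could look something like the sketch below, using the classic callback-style node_redis client together with the aws-sdk. This is untested, and the key name, TTL and helper name are just placeholders:

var AWS = require('aws-sdk');
var redis = require('redis');

AWS.config.loadFromPath('./config.json');

var s3 = new AWS.S3();
var redisClient = redis.createClient();

var CACHE_KEY = 's3:index.html';   // placeholder key name
var CACHE_TTL_SECONDS = 60;        // 1-minute expiry

function getIndexHtml(callback) {
    // Step 4: check Redis before re-pulling from S3
    redisClient.get(CACHE_KEY, function (err, cached) {
        if (!err && cached) {
            // Cache hit: reset the timeout and use the cached copy
            redisClient.expire(CACHE_KEY, CACHE_TTL_SECONDS);
            return callback(null, cached);
        }
        // Cache miss: pull from S3, then store it with the expiry (steps 2 and 3)
        s3.getObject({Bucket: 'assets', Key: 'index.html'}, function (err, data) {
            if (err) return callback(err);
            var body = data.Body.toString('utf8');
            redisClient.setex(CACHE_KEY, CACHE_TTL_SECONDS, body);
            callback(null, body);
        });
    });
}

Your HTTP handler would call getIndexHtml() and write the returned string to the response, so S3 only gets hit again once the cached copy has expired.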
