
Scale node.js socket.io@1.*.* with cluster and socket.io-redis on Heroku

Does anybody know a good solution for scaling a node.js/socket.io-based app over multiple cores? I am currently testing the solution presented in the socket.io documentation for using socket.io over multiple nodes, but without any concrete success.

I have created a playground for this on GitHub: https://github.com/liviuignat/socket.io-clusters , which is a slightly modified copy of the chat application from the socket.io site. It uses express, cluster, socket.io@1.1.0 and socket.io-redis.

There is currently also an implementation using sticky-session in the feature/sticky branch, which seems to work better.

In the end the application needs to be published to Heroku and scaled over multiple dynos.

Initially I tried doing something like this - starting the server only in the forked cluster workers - but I always get the error: failed: Connection closed before receiving a handshake response

var cluster = require('cluster');
var numCPUs = require('os').cpus().length;
// Server is the app's own express/socket.io wrapper from the repository above

if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  var server = new Server({
      dirName: __dirname,
      enableSocket: true
    })
    .setupApp()
    .setupRoutes()
    .start();
}

Then I also tried starting the server in the master process:

if (cluster.isMaster) {
  var server = new Server({
      dirName: __dirname,
      enableSocket: true
    })
    .setupApp()
    .setupRoutes()
    .start();

  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  var server = new Server({
      dirName: __dirname,
      enableSocket: true
    })
    .setupApp()
    .setupRoutes()
    .start();
}

I also tried using both sticky-session and socket.io-redis together in the feature/sticky branch, which seems to work, but still does not seem to be a good solution:

if (cluster.isMaster) {
  sticky(function() {
    var server = new Server({
        dirName: __dirname,
        enableSocket: true
      })
      .setupApp()
      .setupRoutes();
    return server.http;
  }).listen(3000, function() {
    console.log('server started on 3000 port');
  });

  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  sticky(function() {
    var server = new Server({
        dirName: __dirname,
        enableSocket: true
      })
      .setupApp()
      .setupRoutes();
    return server.http;
  }).listen(3000, function() {
    console.log('server started on 3000 port');
  });
}

I will do more tests over the next few days, but it would help a lot if anybody could come up with some ideas.

Thanks,

You are probably looking for socket.io-redis: http://socket.io/blog/introducing-socket-io-1-0/ (scroll to 'Scalability').

Here's a shortened example of how to create the scaffolding with socket.io + express:

var cluster = require('cluster');

var express = require('express')
    , app = express()
    , server = require('http').createServer(app);

var io = require('socket.io').listen(server);
var redis = require('socket.io-redis');
io.adapter(redis({ host: 'localhost', port: 6379 }));

var workers = process.env.WORKERS || require('os').cpus().length;

/**
 * Start cluster.
 */

if (cluster.isMaster) {

  /**
   * Fork process.
   */

  // leave one core for the master process
  console.log('start cluster with %s workers', workers - 1);
  workers--;
  for (var i = 0; i < workers; ++i) {
    var worker = cluster.fork();
    console.log('worker %s started.', worker.process.pid);
  }

  /**
   * Restart process.
   */

  cluster.on('exit', function(worker, code, signal) {
    console.log('worker %s died. restart...', worker.process.pid);
    cluster.fork();
  });


} else {
  server.listen(process.env.PORT || 9010);
}

Redis has pub/sub, and all socket.io nodes need to subscribe to redis to get all messages from a channel. This way one process can broadcast a message to a channel (publish) and all other processes receive it with minimal latency and broadcast it to their connected clients (subscribe). You can even extend this with redis-based sessions.
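
As a minimal sketch of what this looks like inside a worker (assuming the io instance with the redis adapter attached from the snippet above), a broadcast issued in one process is relayed through Redis to the clients held by every other process:

// Runs in every worker; io already has the socket.io-redis adapter attached.
io.on('connection', function(socket) {
  socket.on('chat message', function(msg) {
    // io.emit publishes through the redis adapter, so clients connected
    // to other workers (or other dynos) receive the message as well.
    io.emit('chat message', msg);
  });
});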

The cluster module you are referring to is a bit misleading, in my opinion. As far as I understand the concept, it helps to create individual sub-processes, but it doesn't 'synchronize' the channels across multiple nodes. If your clients do not need to communicate with each other, that's fine. If you want to broadcast messages to all connected clients on all nodes, you need the redis module.
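
If you go the redis-based sessions route mentioned above, a sketch using express-session with connect-redis could look like the following (the store options and the secret are placeholder assumptions, not part of the original answer):

var session = require('express-session');
var RedisStore = require('connect-redis')(session);

// Every worker talks to the same redis instance, so a client can hit
// any process (or dyno) and still find its session data.
app.use(session({
  store: new RedisStore({ host: 'localhost', port: 6379 }),
  secret: 'replace-with-a-real-secret', // placeholder
  resave: false,
  saveUninitialized: false
}));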
