
What is Node.js cluster best practice?

Is it better to write our server logic before forking workers or after?

I'll give two examples below to make it clear.

example #1:

const express = require("express");
const cluster = require('cluster');
const app = express();

app.get("/path", somehandler);

if (cluster.isMaster)
  // forking workers..
else
  app.listen(8000);

or example #2:

const cluster = require('cluster');

if (cluster.isMaster)
  // forking workers..
else {
  const express = require("express");
  const app = express();

  app.get("/path", somehandler);

  app.listen(8000);
}

What is the difference?

There is no difference. When you call cluster.fork(), it calls child_process.fork on the same entry file and keeps a handle to the child process for inter-process communication.

Read the methods defined at the following lines of cluster's master module: 167, 102, 51, 52
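
As a quick illustration (a minimal sketch, not taken from the cluster source; the file name cluster-demo.js is just for this example), running the following shows that each forked worker re-executes the same entry file from the top:

'use strict';

const cluster = require('cluster');

if (cluster.isMaster) {
  console.log('master', process.pid, 'is forking one worker');
  cluster.fork(); // spawns a new Node.js process running this same file
} else {
  console.log('worker', process.pid, 'ran this file again from the top');
}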


Let's get back to your code:

1) In example #1, it assigns the variables and creates the app instance in both the master and the child processes, then checks whether the process is the master or not.

2) In example #2, it checks whether the process is the master or not, and if not, it assigns the variables, creates the app instance and binds the listener on the port for the child workers.


In fact, it will do the same operations in the child processes:

1) assigning variables

2) creating app instance

3) starting listener


My own best practice for using cluster has 2 steps:

Step 1 - have a custom cluster wrapper in a separate module and wrap the application call:

Have a cluster.js file:

'use strict';

module.exports = (callable) => {
  const
    cluster = require('cluster'),
    numCPUs = require('os').cpus().length;

  // re-spawn a replacement worker whenever one dies
  const handleDeath = (deadWorker) => {
    console.log('worker ' + deadWorker.process.pid + ' died');

    const worker = cluster.fork();
    console.log('re-spawning worker ' + worker.process.pid);
  };

  process.on('uncaughtException',
    (err) => {
      console.error('uncaughtException:', err.message);
      console.error(err.stack);
    });

  cluster.on('exit', handleDeath);

  if (numCPUs === 1 || !cluster.isMaster) {
    return callable();
  }

  console.log('Starting', numCPUs, 'instances');
  for (let i = 0; i < numCPUs; i++) cluster.fork();
};

Keep app.js simple like this for modularity and testability (read about supertest):

'use strict';

const express = require("express");
const app = express();

app.get("/path", somehandler);

module.exports = app;
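
For instance, a minimal test sketch using supertest could look like this (the file name and the mocha-style describe/it runner are assumptions, not part of the original answer):

'use strict';

// test.js - assumes `npm i -D supertest mocha` (illustrative setup)
const request = require('supertest');
const app = require('./app');

describe('GET /path', () => {
  it('responds with 200', (done) => {
    request(app)          // supertest binds the exported app to an ephemeral port
      .get('/path')
      .expect(200, done); // assert the status code, then end the test
  });
});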

Serving the app on some port must be handled by a different module, so have server.js look like this:

'use strict';

const start = require('./cluster');

start(() => {

  const http = require('http');
  const app = require('./app');


  const listenHost = process.env.HOST || '127.0.0.1';
  const listenPort = process.env.PORT || 8080;
  const httpServer = http.createServer(app);

  httpServer.listen(listenPort, listenHost,
      () => console.log('App listening at http://'+listenHost+':'+listenPort));
});

You may add such lines to the scripts section of package.json:

"scripts": {
  "start": "node server.js",
  "watch": "nodemon server.js",
  ...
}

Run the app using:

node server.js or nodemon server.js

or

npm start or npm run watch



Step 2 - containerization, when needed:

Keep the code structure as in Step 1 and use Docker.

The cluster module will get the CPU resources provided by the container orchestrator,

and as an extra you'll have the ability to scale Docker instances on demand using Docker Swarm, Kubernetes, DC/OS, etc.

Dockerfile:

FROM node:alpine

ENV PORT=8080
EXPOSE $PORT

ADD ./ /app
WORKDIR /app

RUN apk update && apk upgrade && \
    apk add --no-cache bash git openssh

RUN npm i
CMD ["npm", "start"]
