
How to make a ZeroMQ publisher redundant in Node.js?

Is there a common pattern used to make the publisher in 0mq pub/sub redundant in Node? The motivation is to be able to run multiple processes with publishers that could fail or be restarted periodically.

My initial thought is to create a forwarder in the master process and connect the worker publishers to it:

var cluster = require('cluster')
  , zmq = require('zmq')
  , endpointIn = 'ipc:///tmp/cluster_pub_sub'
  , endpointOut = 'tcp://127.0.0.1:7777';

if (cluster.isMaster) {
  for (var i = 0; i < 2; i++) cluster.fork();
  startPubSubForwarder();
} else {
  startPublisher();
}

function startPublisher() {
  var socket = zmq.socket('pub');
  socket.connect(endpointIn);
  setInterval(function () {
    socket.send('pid=' + process.pid);
  }, 1000);
}

function startPubSubForwarder() {
  var sIn = zmq.socket('sub')
    , sOut = zmq.socket('pub');

  // incoming
  sIn.subscribe('');
  sIn.bind(endpointIn, function (err) {
    if (err) throw err;
  });
  sIn.on('message', function (data) {
    sOut.send(data);
  });

  // outgoing
  sOut.bind(endpointOut, function (err) {
    if (err) throw err;
  });
}
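
A subscriber would then connect to endpointOut, along these lines (just a minimal sketch, not part of the code above):

var zmq = require('zmq')
  , endpointOut = 'tcp://127.0.0.1:7777';

var socket = zmq.socket('sub');
socket.connect(endpointOut);   // connect to the forwarder's outgoing PUB socket
socket.subscribe('');          // receive all messages

socket.on('message', function (data) {
  console.log('received:', data.toString());
});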

Are there other / better ways of doing this?

If your concern is message durability, then I would think you'd be less concerned about having multiple publishers and more concerned about making sure that messages aren't lost when your publisher dies. That way you can simply restart the publisher immediately and have it pick up where it left off, which means you also need to know which messages have been successfully sent.

What this requires is 1) persistent storage and 2) a means of acknowledging to the publisher that the message was received (and possibly that processing completed) on the receiver's end. This setup should meet your reliability requirements.
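
As a rough illustration of that idea (my sketch, not the answer's actual setup): the publisher keeps every outgoing message in a store until the receiver acknowledges it over a separate channel. The in-memory Map is a stand-in for real persistent storage, and the PULL socket for acks, the endpoints, and the message id format are all assumptions made for the sketch:

// publisher with ack tracking -- minimal sketch
var zmq = require('zmq');

var pub = zmq.socket('pub');
pub.bind('tcp://127.0.0.1:7777', function (err) { if (err) throw err; });

var acks = zmq.socket('pull');           // receivers push acks back here (assumed channel)
acks.bind('tcp://127.0.0.1:7778', function (err) { if (err) throw err; });

var pending = new Map();                 // stand-in for persistent storage
var nextId = 0;

function publish(body) {
  var id = String(++nextId);
  pending.set(id, body);                 // persist before sending
  pub.send(id + ' ' + body);
}

acks.on('message', function (msg) {
  var id = msg.toString();
  pending.delete(id);                    // message confirmed, safe to forget
});

// on restart, anything still left in `pending` can simply be re-sent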

If you also want to achieve high scale, then you need to augment the architecture a bit. It's more straightforward for send/receive scenarios where the sender and receiver are 1:1, and a little more complex when you need a 1:N round-robin/load-distribution scenario, which is probably what you need for scale.

My suggestion for the scale-out scenario is the following setup:

sender_process--(1:1)-->distributor--(1:N)-->receiver_process(es)

where the distributor acknowledges receipt of the message from sender and then fans out to the receiver processes.

You'll likely want to accomplish this by putting a queue in front of each of these processes, so you don't send to the process itself; you send to the queue that the process reads from. The sender puts messages on the distributor's queue, and the distributor puts messages on the receivers' queues. At each point, each process attempts to process the message; if it fails more than a maximum number of retries, the message goes onto an error queue.
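
The answer below does this with RabbitMQ/AMQP; purely as an illustration in ZeroMQ terms (my sketch, not the answer's code), the distributor step could take messages in on a PULL socket and fan them out on a PUSH socket, since PUSH load-balances round-robin across connected receivers. The endpoints are made up for the example:

// distributor -- minimal ZeroMQ sketch of sender --> distributor --> receiver(s)
var zmq = require('zmq');

var fromSender = zmq.socket('pull');     // the sender PUSHes messages here (1:1)
var toReceivers = zmq.socket('push');    // PUSH round-robins across connected receivers (1:N)

fromSender.bind('tcp://127.0.0.1:7780', function (err) { if (err) throw err; });
toReceivers.bind('tcp://127.0.0.1:7781', function (err) { if (err) throw err; });

fromSender.on('message', function (msg) {
  // a real setup would persist and acknowledge here before forwarding
  toReceivers.send(msg);
});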

We use rabbitmq / amqp to do all this. I've started open sourcing the bus we use to do the 1:1 and 1:N sending here: https://github.com/mateodelnorte/servicebus . I'll be adding a readme and more tests over the next few days.

From your example code, I think the XPUB/XSUB 0MQ pattern is your best fit. It is a more efficient way to achieve the same thing as your startPubSubForwarder() block, plus the subscriber side gets the benefit of being able to subscribe to certain patterns directly against the publishers' backend. Here is a link with an example of publishers / an XPUB-XSUB proxy / subscribers: https://github.com/krakatoa/node_zmq_workshop/tree/master/03.3_news_proxy . It is Node.js code (and it is mine, so feel free to ask for details!).
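
Not taken from the linked example, but roughly what an XPUB/XSUB proxy looks like with the zmq package (endpoint names reused from the question); the key point is that subscription frames arriving on the XPUB side are forwarded upstream, so publishers only filter and send what someone has actually subscribed to:

// xpub/xsub proxy -- minimal sketch reusing the question's endpoints
var zmq = require('zmq')
  , endpointIn = 'ipc:///tmp/cluster_pub_sub'   // publishers connect here
  , endpointOut = 'tcp://127.0.0.1:7777';       // subscribers connect here

var xsub = zmq.socket('xsub');
var xpub = zmq.socket('xpub');

xsub.bind(endpointIn, function (err) { if (err) throw err; });
xpub.bind(endpointOut, function (err) { if (err) throw err; });

// messages from publishers flow out to subscribers
xsub.on('message', function () {
  xpub.send(Array.prototype.slice.call(arguments));
});

// subscription/unsubscription frames from subscribers flow back to publishers
xpub.on('message', function () {
  xsub.send(Array.prototype.slice.call(arguments));
});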
