
Is this a good server load-balancing system?

I'm new to this concept and I'm thinking about how to horizontally scale my Xepler Node.js framework.

So, the main app on the master server will proxy each request to the first cluster in a queue (maybe retrieved using shared memory with Redis). Every X requests (X is decided by me depending on the server's capabilities, maybe using a map), that cluster will be moved to the last place in the queue. This way, each cluster only receives a reduced number of requests.
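
If the queue does live in Redis, RPOPLPUSH with the same source and destination list gives an atomic rotation that every master process can share. Below is a minimal sketch, assuming a Redis list named "clusters" that holds "host:port" strings and the callback-style redis client; for brevity it rotates on every request rather than every X requests.

var redis = require('redis'),
    client = redis.createClient();

// Seed the shared queue once (e.g. from a deploy script).
client.del('clusters');
client.rpush('clusters', 'localhost:8080');
client.rpush('clusters', 'localhost:8081');

// RPOPLPUSH on the same list moves the tail element to the head and
// returns it, so every caller sees the same round-robin order.
function nextCluster(callback) {
    client.rpoplpush('clusters', 'clusters', function (err, entry) {
        if (err || !entry) return callback(err || new Error('queue is empty'));
        var parts = entry.split(':');
        callback(null, { host: parts[0], port: parseInt(parts[1], 10) });
    });
}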

Another app, on another server, will make a request to all the clusters every X seconds to check whether any of them has failed, removing it from the queue (should this queue be in Redis?).
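
That health-check app could look roughly like the sketch below, assuming the same "clusters" list in Redis; the /health path, the 5-second timeout and the 10-second interval are arbitrary placeholders, and LREM is what actually removes a dead cluster from the shared queue.

var http = require('http'),
    redis = require('redis'),
    client = redis.createClient(),
    reported = {};

function markDead(entry) {
    if (reported[entry]) return;        // avoid a double LREM (timeout + error)
    reported[entry] = true;
    client.lrem('clusters', 0, entry, function () {
        console.log('Removed dead cluster ' + entry);
        delete reported[entry];
    });
}

function checkCluster(entry) {
    var parts = entry.split(':'),
        req = http.get({ host: parts[0], port: parts[1], path: '/health' }, function (res) {
            res.resume();               // it answered, so consider it alive
        });

    req.setTimeout(5000, function () { req.abort(); markDead(entry); });
    req.on('error', function () { markDead(entry); });
}

// Every X seconds (10 here), ping every cluster currently in the queue.
setInterval(function () {
    client.lrange('clusters', 0, -1, function (err, entries) {
        if (!err) entries.forEach(checkCluster);
    });
}, 10000);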

All the clusters will normally run an instance of my web framework.

Do you think this is a good load-balancing system, or have I completely misunderstood how it works? Thank you guys.

Edit: this is what I mean (only an example):

var http = require('http'),
    https = require('https'),
    httpProxy = require('http-proxy'),
    proxy = httpProxy.createProxyServer({}),

    clusters = [
        {
            id: 1,
            host: "localhost",
            port: 8080,
            dead: false,
            deadTime: undefined
        },
        {
            id: 2,
            host: "localhost",
            port: 8081,
            dead: false,
            deadTime: undefined
        }
    ];

// Proxy each incoming request to the next available cluster in the queue.
http.createServer(function(req, res) {
    var target = getAvailableCluster();

    if (target != -1) {
        proxy.web(req, res, { target: 'http://' + target.host + ':' + target.port });

        // If the response takes longer than 20 seconds, mark this cluster as dead.
        res.setTimeout(1e3 * 20, function() {
            target.dead = true;
            target.deadTime = new Date().getTime();
            console.log("Cluster " + target.id + " is dead");
        });
    }   
}).listen(80, function() {
    console.log('Proxy listening on port 80..');
});

proxy.on('error', function (error, req, res) {
    var json;
    console.log('proxy error', error);

    if (!res.headersSent)
        res.writeHead(500, { 'content-type': 'application/json' });

    json = { error: 'proxy_error', reason: error.message };
    res.end(JSON.stringify(json));
});

// Periodically revive clusters that have been marked dead for more than a second.
setInterval(function() {
    var cluster,
        currentTime = new Date().getTime();

    for (var i=0; i<clusters.length; i++) {
        cluster = clusters[i];

        if (cluster.dead && (currentTime - cluster.deadTime) > 1000) {
            cluster.dead = false;
            console.log("Cluster " + cluster.id + " is now alive");
        }
    }   
}, 5000);

// Rotate the queue and return the first cluster that is not marked dead, or -1 if none is available.
function getAvailableCluster() {
    var cluster;

    for (var i=0; i<clusters.length; i++) {
        cluster = clusters.shift();
        clusters.push(cluster);

        if (!cluster.dead)      
            return cluster;
    }

    return -1;
}

Why are you re-inventing the wheel? There is Hipache, a reverse proxy / load balancer which, as far as I can see, has all the features you need.
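
For reference, Hipache's README has backends registered through Redis lists keyed frontend:<domain>, where the first element is an identifier and the rest are backend URLs. A rough sketch of pointing it at the two example clusters above (the domain name and identifier are placeholders):

var redis = require('redis'),
    client = redis.createClient();

// One list per virtual host: an identifier first, then the backend URLs.
client.del('frontend:www.example.com');
client.rpush('frontend:www.example.com', 'my-app');
client.rpush('frontend:www.example.com', 'http://localhost:8080');
client.rpush('frontend:www.example.com', 'http://localhost:8081', function () {
    client.quit();
});

Hipache then handles the routing, load balancing and dead-backend detection itself, so none of the queue bookkeeping above has to be written by hand.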
