Scaling socket.io with HAProxy
So far I have had a single node.js app running socket.io. As the number of users grows, it reaches 100% CPU for most of the day, so I decided to split users across multiple node.js processes. I have split my node.js application logic to allow sharding of users onto different subdomains. I also extracted the session code into token passing via the URL, so cookies are not important.
I'd like to use at least 4 cores of my 8-core machine, so I want to run multiple node.js processes, each serving the app on its own subdomain. In order for all the node.js processes to be accessible via port 80, I decided to use HAProxy. The setup looks like this:
domain.com -> haproxy -> node on 127.0.0.1:5000
sub1.domain.com -> haproxy -> node on 127.0.0.1:5001
sub2.domain.com -> haproxy -> node on 127.0.0.1:5002
sub3.domain.com -> haproxy -> node on 127.0.0.1:5003
Now everything works, but the regular part of the application (the part not using socket.io) is very slow. It's written using Express.js, and it works fast when I open the page directly (i.e. not through HAProxy). Also, connecting to socket.io works fast with the XHR transport, but for the WebSocket transport it also takes a long time to establish a connection. Once the connection is established, it works well and fast.
I have never used HAProxy before, so I probably misconfigured something. Here's my HAProxy config:
global
    maxconn 50000
    daemon

defaults
    mode http
    retries 1
    contimeout 8000
    clitimeout 120000
    srvtimeout 120000

frontend http-in
    bind *:80
    acl is_l1 hdr_end(host) -i sub1.domain.com
    acl is_l2 hdr_end(host) -i sub2.domain.com
    acl is_l3 hdr_end(host) -i sub3.domain.com
    acl is_l0 hdr_end(host) -i domain.com
    use_backend b1 if is_l1
    use_backend b2 if is_l2
    use_backend b3 if is_l3
    use_backend b0 if is_l0
    default_backend b0

backend b0
    balance source
    option forwardfor except 127.0.0.1 # stunnel already adds the header
    server s1 127.0.0.1:5000

backend b1
    balance source
    option forwardfor except 127.0.0.1 # stunnel already adds the header
    server s2 127.0.0.1:5001

backend b2
    balance source
    option forwardfor except 127.0.0.1 # stunnel already adds the header
    server s3 127.0.0.1:5002

backend b3
    balance source
    option forwardfor except 127.0.0.1 # stunnel already adds the header
    server s4 127.0.0.1:5003
I figured it out. I failed to find this in the docs, but the global maxconn setting does NOT apply to frontends. A frontend has a default limit of 2000 concurrent connections, and everything beyond that was queued. Since I have long-lived socket.io connections, this created problems.
The solution is to explicitly set maxconn in the frontend section.
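With that change, the frontend section from the config above becomes (only the maxconn line is new; the value mirrors the global setting):

```
frontend http-in
    bind *:80
    maxconn 50000
    # ACLs and use_backend rules unchanged from the config above
```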