
Scaling socket.io with HAProxy

So far I have had a single node.js app running socket.io. As the number of users grows, it reaches 100% CPU for most of the day, so I decided to split users across multiple node.js processes. I have split my application logic to allow sharding users onto different subdomains. I also moved session handling to token passing via the URL, so cookies are not important.

I'd like to use at least 4 cores of my 8-core machine, so I want to run multiple node.js processes, each serving the app on its own subdomain. In order for all of the node.js processes to be reachable on port 80, I decided to put HAProxy in front of them. The setup looks like this:

     domain.com -> haproxy -> node on 127.0.0.1:5000
sub1.domain.com -> haproxy -> node on 127.0.0.1:5001
sub2.domain.com -> haproxy -> node on 127.0.0.1:5002
sub3.domain.com -> haproxy -> node on 127.0.0.1:5003
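
For context, each of those processes is just an ordinary Express + socket.io server bound to its own local port. A minimal sketch of one worker (the file name and the use of a PORT environment variable are my own assumptions, not part of the original setup):

// worker.js (hypothetical name) - one Express + socket.io process.
// Start one copy per subdomain with a different PORT (5000-5003),
// matching the HAProxy backends below.
var express = require('express');
var http = require('http');

var app = express();
var server = http.createServer(app);
var io = require('socket.io').listen(server);  // socket.io 0.9-style attach

// Regular (non-socket.io) Express route.
app.get('/', function (req, res) {
  res.send('hello from port ' + process.env.PORT);
});

// socket.io connection handler.
io.sockets.on('connection', function (socket) {
  socket.emit('connected', { port: process.env.PORT });
});

// Bind to localhost only; HAProxy is the public entry point on port 80.
server.listen(parseInt(process.env.PORT, 10) || 5000, '127.0.0.1');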

Now everything works, but the regular part of the application (the part not using socket.io) is very slow. It's written with Express.js and it is fast when I open the page directly (i.e. not through HAProxy). Connecting to socket.io is also fast with the XHR transport, but with the WebSocket transport it takes a long time to establish a connection. Once the connection is established, it works well and fast.

I have never used HAProxy before, so I probably misconfigured something. Here's my HAProxy config:

global
    maxconn 50000
    daemon

defaults
    mode http
    retries 1
    contimeout 8000
    clitimeout 120000
    srvtimeout 120000

frontend http-in
    bind *:80
    acl is_l1 hdr_end(host) -i sub1.domain.com
    acl is_l2 hdr_end(host) -i sub2.domain.com
    acl is_l3 hdr_end(host) -i sub3.domain.com
    acl is_l0 hdr_end(host) -i domain.com
    use_backend b1 if is_l1
    use_backend b2 if is_l2
    use_backend b3 if is_l3
    use_backend b0 if is_l0
    default_backend b0

backend b0
    balance source
    option forwardfor except 127.0.0.1  # stunnel already adds the header
    server s1 127.0.0.1:5000

backend b1
    balance source
    option forwardfor except 127.0.0.1  # stunnel already adds the header
    server s2 127.0.0.1:5001

backend b2
    balance source
    option forwardfor except 127.0.0.1  # stunnel already adds the header
    server s2 127.0.0.1:5002

backend b3
    balance source
    option forwardfor except 127.0.0.1  # stunnel already adds the header
    server s2 127.0.0.1:5003

I figured it out. I failed to find this in the docs, but the global maxconn setting does NOT apply to frontends. A frontend defaults to 2000 concurrent connections, and everything beyond that was queued. Since I have long-lived socket.io connections, this created problems.

The solution is to explicitly set maxconn in the frontend section.
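
For reference, a minimal sketch of the corrected frontend, reusing the limit I already had in the global section (50000 is just my value, not a recommendation):

frontend http-in
    bind *:80
    maxconn 50000   # without this, the frontend default of 2000 applies and extra connections are queued
    # ... ACLs and use_backend rules as before ...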
