
How to scale node.js / socket.io server?

I am currently running a node.js app and am about to introduce socket.io to allow real-time updates (chat, in-app notifications, ...). At the moment, I am running the smallest available setup from DigitalOcean (1 vCPU, 1 GB RAM) for my node.js server. I stress-tested the node.js app connecting to socket.io using Artillery:

config:
  target: "https://my.server.com"
  socketio:
    - transports: ["websocket"] # optional, same results if I remove this
  phases:
    - duration: 600
      arrivalRate: 20
scenarios:
- name: "A user that just connects"
  weight: 90
  engine: "socketio"
  flow:
    - get:
        url: "/"
    - think: 600

It can handle a couple hundred concurrent connections. After that, I start getting the following errors:

Errors:
  ECONNRESET: 1
  Error: xhr poll error: 12

When I resize my DigitalOcean droplet to 8 vCPUs and 32 GB RAM, I can get upwards of 1700 concurrent connections. No matter how much further I resize, it always sticks around that number.

My first question: is this normal behavior? Is there any way to increase this number per droplet, so I can have more concurrent connections on a single node instance? Here is my configuration:

sysctl -p

fs.file-max = 2097152
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2
net.ipv4.tcp_synack_retries = 2
net.ipv4.ip_local_port_range = 2000 65535
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
net.core.rmem_default = 31457280
net.core.rmem_max = 12582912
net.core.wmem_default = 31457280
net.core.wmem_max = 12582912
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 65536
net.core.optmem_max = 25165824
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_reuse = 1
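As a sanity check, the live kernel values can be read back from `/proc` to confirm that `sysctl -p` actually applied the settings above (a quick sketch; the two keys shown are examples from the list):

```shell
# Read the live values back from /proc to confirm `sysctl -p` applied them
cat /proc/sys/net/core/somaxconn
cat /proc/sys/net/ipv4/ip_local_port_range
```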

ulimit -a

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 3838
max locked memory       (kbytes, -l) 16384
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 10000000
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
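The `open files` value above (65535) is the usual per-process ceiling on concurrent sockets, so it is worth checking it first when connections plateau. Note also that if node runs as a systemd service, the service gets its own limit regardless of the shell's `ulimit` output; one way to raise it is a drop-in override (the unit name `myapp.service` here is hypothetical):

```
# /etc/systemd/system/myapp.service.d/override.conf
[Service]
LimitNOFILE=1000000
```

After adding the file, run `systemctl daemon-reload` and restart the service for the new limit to take effect.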

nginx.conf

user www-data;
worker_processes auto;
worker_rlimit_nofile 1000000;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        multi_accept on;
        use epoll;
        worker_connections 1000000;
}

http {

        ##
        # Basic Settings
        ##

        client_max_body_size 50M;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 120;
        keepalive_requests 10000;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
        ##
        # Gzip Settings
        ##

        gzip on;

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}
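For reference, the `http` block above does not include the proxy settings socket.io needs; those usually live in the per-site config (under `sites-enabled`), and the WebSocket upgrade headers are easy to miss. A minimal sketch, assuming the node app listens on 127.0.0.1:3000 (a placeholder address):

```
# Hypothetical site config, e.g. /etc/nginx/sites-enabled/my.server.com
server {
    listen 443 ssl;
    server_name my.server.com;

    location /socket.io/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;                 # required for WebSockets
        proxy_set_header Upgrade $http_upgrade; # pass the upgrade handshake
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 120s;                # match keepalive_timeout above
    }
}
```

Without `proxy_http_version 1.1` and the `Upgrade`/`Connection` headers, socket.io falls back to long-polling, which can produce exactly the `xhr poll error` seen in the Artillery results.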

Another question: I am thinking about scaling horizontally and spinning up more droplets, say 4 droplets to proxy all connections to. How would this be set up in practice? I would use Redis to emit through socket.io to all connected clients. Do I use 4 droplets with the same configuration? Do I run the same stuff on all 4 of them? For instance, should I upload the same server.js app to all 4 droplets? Any advice is welcome.

I can't really answer your first question, but I can try my best on your second.

If you're setting up load balancing, you run the same server.js app on each droplet and have one load balancer distribute the traffic among them. I don't know much about Redis, but I found this: https://redis.io/topics/cluster-tutorial. I hope this helps.
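To make the "same server.js on every droplet" part concrete, here is a sketch of how the Redis wiring typically looks, assuming Socket.IO v4 with the `@socket.io/redis-adapter` and `redis` npm packages; the Redis URL is a placeholder for your own instance:

```javascript
// server.js — the same file deployed to every droplet (sketch)
const { createServer } = require("http");
const { Server } = require("socket.io");
const { createAdapter } = require("@socket.io/redis-adapter");
const { createClient } = require("redis");

const httpServer = createServer();
const io = new Server(httpServer);

const pubClient = createClient({ url: "redis://redis-host:6379" });
const subClient = pubClient.duplicate();

Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  // With the adapter in place, io.emit() on any droplet reaches clients
  // connected to every other droplet, relayed through Redis pub/sub.
  io.adapter(createAdapter(pubClient, subClient));
  httpServer.listen(3000);
});
```

One caveat: if the long-polling transport is enabled, the load balancer needs sticky sessions (e.g. nginx `ip_hash`) so that a client's successive polling requests hit the same droplet.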

Disclaimer: the technical posts on this site follow the CC BY-SA 4.0 license. If you need to repost, please credit this site or the original source. For any questions, contact: yoyou2525@163.com.

 
粤ICP备18138465号  © 2020-2024 STACKOOM.COM