
AWS load balancer for Server-Sent Events or WebSockets

I'm trying to load balance a Node.js Server-Sent Events backend, and I need to know if there is a way to distribute new connections to the instances with the fewest connected clients. The problem I have is that when scaling up, the routing keeps sending new connections to the already saturated instance, and since the connections are long-lived this simply won't work.

What options do I have for horizontal scaling long lived connections?

Since you are using AWS, I'd recommend Elastic Beanstalk for your Node.js application deployment. The official documentation provides good examples, like this one. Note that Beanstalk will automatically create an Elastic Load Balancer for you, which is what you're looking for.

By default, Elastic Beanstalk creates an Application Load Balancer for your environment when you enable load balancing with the Elastic Beanstalk console or the EB CLI. It configures the load balancer to listen for HTTP traffic on port 80 and forward this traffic to instances on the same port.

[...]

Note: Your environment must be in a VPC with subnets in at least two Availability Zones to create an Application Load Balancer. All new AWS accounts include default VPCs that meet this requirement. If your environment is in a VPC with subnets in only one Availability Zone, it defaults to a Classic Load Balancer. If you don't have any subnets, you can't enable load balancing.

Note that the configuration of a proper health check path is key to properly balance requests, as you mentioned in your question.

In a load balanced environment, Elastic Load Balancing sends a request to each instance in an environment every 10 seconds to confirm that instances are healthy. By default, the load balancer is configured to open a TCP connection on port 80. If the instance acknowledges the connection, it is considered healthy.

You can choose to override this setting by specifying an existing resource in your application. If you specify a path, such as /health, the health check URL is set to HTTP:80/health. The health check URL should be set to a path that is always served by your application. If it is set to a static page that is served or cached by the web server in front of your application, health checks will not reveal issues with the application server or web container.

EDIT: If you're looking for sticky sessions, as I described in the comments, follow the steps provided in this guide:

To enable sticky sessions using the console

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.

  2. On the navigation pane, under LOAD BALANCING, choose Target Groups.

  3. Select the target group.

  4. On the Description tab, choose Edit attributes.

  5. On the Edit attributes page, do the following:

     a. Select Enable load balancer generated cookie stickiness.

     b. For Stickiness duration, specify a value between 1 second and 7 days.

     c. Choose Save.
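The same console steps can be scripted with the AWS CLI; a sketch, with a placeholder target-group ARN you would replace with your own:

```shell
# Enable ALB cookie-based stickiness on a target group (placeholder ARN).
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/my-targets/TGID \
  --attributes \
    Key=stickiness.enabled,Value=true \
    Key=stickiness.type,Value=lb_cookie \
    Key=stickiness.lb_cookie.duration_seconds,Value=86400
```

The duration (here 86400 seconds, i.e. one day) must fall within the 1-second-to-7-day range mentioned in step 5b.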

It looks like you want a load balancer that can provide both "sticky sessions" and a "least connections" policy instead of "round-robin". Unfortunately, open-source NGINX cannot provide both: it supports a `least_conn` balancing method, but cookie-based session stickiness requires the commercial NGINX Plus.

HAProxy (High Availability Proxy) allows for this:

backend bk_myapp
 # insert a stickiness cookie named MyAPP on the first response
 cookie MyAPP insert indirect nocache
 # send new (cookieless) connections to the server with the fewest active connections
 balance leastconn
 server srv1 10.0.0.1:80 check cookie srv1
 server srv2 10.0.0.2:80 check cookie srv2

If you need ELB functionality and want to roll it all manually, take a look at this guide.

You might also want to make sure the classic AWS ELB "sticky session" configuration or the newer ALB "sticky session" option does not already meet your needs. ELB normally routes each new connection to the upstream instance with the least "load", which, combined with sticky sessions, might be enough.

