
Socket.io Websockets on a TCP configured Amazon Elastic Load Balancer

I'm planning to set up a group of NodeJS application servers running Socket.io on EC2, and I'd like to use the Elastic Load Balancer to spread load between them. I know ELB doesn't support Websockets out of the box, but I can use the setup described here in Scenario 2.

As described in the blog post, though, I notice that this setup offers no session affinity or source IP info:

We cannot have Session Affinity nor X-Forward headers with this setup because ELB is not parsing the HTTP messages, so it's impossible to match the cookies to ensure Session Affinity or inject special X-Forward headers.

Will Socket.io still work under these circumstances? Or is there another way to have a set of Socket.io app servers behind a load balancer with SSL?

EDIT: Tim Caswell talks about doing this already here. Are there any posts explaining how to set this up? Again, there's no session stickiness here, but things seem to be working fine.

As an aside, are sticky sessions actually necessary with websockets? Does information travel as new and separate requests, or is there only one request and connection along which all the information moves?

Socket.io does not work out of the box even with a TCP ELB because it makes two HTTP requests before upgrading the connection to websockets.

The first request is used to establish the protocol, since socket.io supports more than just websockets.

GET /socket.io/1/?t=1360136617252 HTTP/1.1
User-Agent: node-XMLHttpRequest
Accept: */*
Host: localhost:9999
Connection: keep-alive

HTTP/1.1 200 OK
Content-Type: text/plain
Date: Wed, 06 Feb 2013 07:43:37 GMT
Connection: keep-alive
Transfer-Encoding: chunked

47
xX_HbcG1DN_nufWddblv:60:60:websocket,htmlfile,xhr-polling,jsonp-polling
0

The second request is used to actually upgrade the connection:

GET /socket.io/1/websocket/xX_HbcG1DN_nufWddblv HTTP/1.1
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: MTMtMTM2MDEzNjYxNzMxOA==
Host: localhost:9999

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: 249I3zzVp0SzEn0Te2RLp0iS/z0=

You can see in the above example that xX_HbcG1DN_nufWddblv is a shared key between requests. This is the problem: ELBs do round-robin routing, meaning the upgrade request hits a server that did not participate in the initial negotiation. As such, the server has no idea who the client is.

In-memory stateful data is the enemy of load balancing. Thankfully, socket.io supports using Redis to store that data instead. If multiple servers share the same Redis instance, they essentially share the sessions of all clients.

See the socket.io wiki page for details on setting up Redis.
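
For reference, that wiki setup boils down to something like the following. This is a minimal sketch assuming socket.io 0.9.x (the version whose handshake is shown above) and a Redis instance on the default localhost:6379; in production every app server would point at the same shared Redis.

var http = require('http');
var server = http.createServer().listen(9999);

var io = require('socket.io').listen(server);
var RedisStore = require('socket.io/lib/stores/redis');
var redis = require('socket.io/node_modules/redis');

// Three connections to the same Redis: publish, subscribe, and plain commands.
// With this store, handshake and session state live in Redis, so the upgrade
// request can land on any server behind the load balancer.
io.set('store', new RedisStore({
  redisPub: redis.createClient(),
  redisSub: redis.createClient(),
  redisClient: redis.createClient()
}));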

You can now use the Application Load Balancer recently launched by AWS.

Just replace the ELB (now called the Classic Load Balancer) with the ALB (Application Load Balancer) and enable sticky sessions.

The ALB supports WebSockets, so this should do the trick.

https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/

http://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
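
Stickiness on an ALB is configured per target group. A rough illustration using the AWS SDK for JavaScript (the region and target group ARN below are placeholders, not values from this answer):

var AWS = require('aws-sdk');
var elbv2 = new AWS.ELBv2({ region: 'us-east-1' });

// Enable load-balancer-generated cookie stickiness on the target group
// that fronts the socket.io servers.
elbv2.modifyTargetGroupAttributes({
  TargetGroupArn: 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/socketio-servers/0123456789abcdef',
  Attributes: [
    { Key: 'stickiness.enabled', Value: 'true' },
    { Key: 'stickiness.type', Value: 'lb_cookie' }
  ]
}, function (err, data) {
  if (err) throw err;
  console.log('Stickiness enabled:', data);
});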

As I mentioned in the post, we only use ELB to terminate SSL and load-balance across a cluster of http-proxy servers that do support websockets. ELB doesn't talk to the websocket servers directly. The HTTP proxy cluster handles looking up the right socket.io server to connect to, ensuring session stickiness.
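
The post doesn't include the proxy code, but the shape of that layer is roughly as follows. This is only a sketch using the node http-proxy module; lookupBackend() is a hypothetical helper that maps the socket.io session id in the URL back to the server that issued it (for example via Redis), which is what provides the stickiness.

var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

// Hypothetical: pull the socket.io id out of /socket.io/1/<transport>/<id>
// and return the backend that performed the handshake, so every request for
// that id stays on the same socket.io server. The address is a placeholder.
function lookupBackend(req) {
  return 'http://10.0.0.1:9999';
}

// Plain HTTP requests (including the initial handshake) go through proxy.web().
var server = http.createServer(function (req, res) {
  proxy.web(req, res, { target: lookupBackend(req) });
});

// Websocket upgrades arrive as 'upgrade' events and must be proxied explicitly.
server.on('upgrade', function (req, socket, head) {
  proxy.ws(req, socket, head, { target: lookupBackend(req) });
});

server.listen(80);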

When you run a server in a cloud that has a load balancer, reverse proxy, routers, etc., you need to configure it to work properly, especially when you scale the server to multiple instances.

One of the constraints Socket.io, SockJS and similar libraries have is that they need to continuously talk to the same instance of the server. They work perfectly well when there is only 1 instance of the server.

When you scale your app in a cloud environment, the load balancer (Nginx in the case of Cloud Foundry) takes over, and requests are sent to different instances, causing Socket.io to break.

To help in such situations, load balancers have a feature called 'sticky sessions' aka 'session affinity'. The main idea is that if this property is set, then after the first load-balanced request, all the following requests will go to the same server instance.

In Cloud Foundry, cookie-based sticky sessions are enabled for apps that set the cookie jsessionid.

Note: jsessionid is the cookie name commonly used to track sessions in Java/Spring applications. Cloud Foundry is simply adopting that as the sticky session cookie for all frameworks.

So, all the app needs to do is set a cookie with the name jsessionid to make socket.io work.

app.use(express.cookieParser());
app.use(express.session({ store: sessionStore, key: 'jsessionid', secret: 'your secret here' }));
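
In case you are wondering where sessionStore comes from in that snippet, one common choice (an assumption on my part, not something specified in the answer) is connect-redis with Express 3.x:

var express = require('express');
var RedisStore = require('connect-redis')(express); // connect-redis session store for Express 3.x

// A store shared by every instance, so sessions survive being routed to a
// different server; host and port are placeholders for your shared Redis.
var sessionStore = new RedisStore({ host: 'localhost', port: 6379 });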

So these are the steps:

1. Express sets a session cookie with the name jsessionid.
2. When socket.io connects, it sends that same cookie and hits the load balancer.
3. The load balancer always routes the request to the same server on which the cookie was set.

If you are using an Application Load Balancer, the sticky session settings are at the target group level.
