
Cookie based Load Balancing for WebSockets?

My situation is that we are currently writing an online application which uses Node.js on the server side with a WebSocket listener. We have two different parts: one serves pages and uses Node.js with express + ejs; the other is a completely different app which only includes the socket.io library for WebSockets. So here we come to the issue of scalability of the WebSockets part.

One solution we've found is to use redis and share socket information among the servers, but due to our architecture it would require sharing loads of other information as well, which would create huge overhead on the servers.

After this intro, my question is: is it possible to use cookie-based load balancing for WebSockets? So that, let's say, every connection from a user with the cookie server=server1 will always be forwarded to server1, every connection with the cookie server=server2 will be forwarded to server2, and a connection with no such cookie will be forwarded to the least busy server.

UPDATE: As one 'answer' says -- yes, I know this exists; I just did not remember that the name is sticky sessions. But the question is: will that work for WebSockets? Are there any possible complications?

We had a similar problem show up in our Node.js production stack. We have two servers using WebSockets which work for normal use cases, but occasionally the load balancer would bounce these connections between the two servers, which would cause problems. (We have session code in place on the backend that should have fixed this, but it did not handle the situation properly.)

We tried enabling Sticky Sessions on the Barracuda load balancer in front of these servers but found that it would block WebSocket traffic due to how it operated. I have not researched exactly why, as little information is available online, but it appears that this is due to how the balancer strips off the headers of an HTTP request, grabs the cookie, and forwards the request to the correct backend server. Since a WebSocket connection starts off as HTTP but then upgrades, the load balancer did not notice the difference and would try to do the same HTTP processing. This would cause the WebSocket connection to fail, disconnecting the user.
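For context on that upgrade step: a WebSocket connection begins as an ordinary-looking HTTP request carrying Upgrade headers, and once the server replies 101 Switching Protocols, the connection is no longer HTTP at all, which is exactly what a purely HTTP-aware balancer mishandles. A typical opening handshake looks roughly like this (the host, path, and key value here are illustrative, taken in the style of RFC 6455):

```
GET /socket.io/ HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13
```

After the 101 response, frames flow in both directions on the same TCP connection, so a balancer that keeps trying to parse per-request HTTP headers will break the stream.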

The following is what we currently have in place, and it is working very well. We still use the Barracuda load balancers in front of our backend servers, but we do not have Sticky Sessions enabled on them. On each backend server, in front of our application server, sits HAProxy, which does properly support WebSockets and can provide Sticky Sessions in a 'roundabout' way.


Request Flow List

  1. Incoming client request hits the primary Barracuda load balancer
  2. The load balancer forwards to either of the active backend servers
  3. HAProxy receives the request and checks for the new 'sticky cookie'
  4. Based on the cookie, HAProxy forwards to the correct backend application server

Request Flow Diagram

 WebSocket Request  /--> Barracuda 1 -->\   /--> Host 1 -->\   /--> App 1
------------------->                     -->                -->
                    \--> Barracuda 2 -->/   \--> Host 2 -->/   \--> App 1

Where the arrows fan out and rejoin for one request, the request can flow through either of the two nodes at that stage.


HAProxy Configuration Details

backend app_1
   cookie ha_app_1 insert
   server host1 10.0.0.101:8001 weight 1 maxconn 1024 cookie host_1 check
   server host2 10.0.0.102:8001 weight 1 maxconn 1024 cookie host_2 check

In the above configuration:

  • cookie ha_app_1 insert sets ha_app_1 as the name of the cookie HAProxy inserts
  • cookie host_1 and cookie host_2 on the server lines set the cookie value that pins a client to that server (check enables health checking on the server)
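A backend fragment like the one above is usually paired with a frontend section and long timeouts, so that established WebSocket connections are not cut off by HAProxy's idle-connection handling. The sketch below is illustrative only; the frontend name, bind port, and one-hour timeout values are assumptions, not our production configuration (timeout tunnel requires HAProxy 1.5 or later):

```
frontend ws_in
   bind *:80
   timeout client 1h
   default_backend app_1

backend app_1
   timeout server 1h
   timeout tunnel 1h
   cookie ha_app_1 insert
   server host1 10.0.0.101:8001 weight 1 maxconn 1024 cookie host_1 check
   server host2 10.0.0.102:8001 weight 1 maxconn 1024 cookie host_2 check
```

The key point is that HAProxy treats the upgraded connection as a tunnel rather than as a series of HTTP requests, so the sticky cookie is honored on the handshake and the long-lived connection is then left alone.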
