

How to create a software load balancer for socket connections in Java

I am planning to create a software load balancer that will sit in front of many socket servers on Linux. The clients will connect to the load balancer. The load balancer will maintain a table of alternative IPs and their ports. It will connect the client to the best available IP and port and then disconnect itself from the client. It is thus out of the picture (no longer connected to the client); at that point the only connected parties are the client and the new socket server, NOT the load balancer.

Example: Client ip 10.1.2.3 port 1234
         Load balancer ip 10.1.2.4 port 1235
         List of socket servers in the load balancer:
         A ip 10.1.2.4 port 1236
         B ip 10.1.2.4 port 1237
         C ip 10.1.2.5 port 1238
    Now,
    for the 1st request from the client, the load balancer will establish a connection between the client and server A and disconnect itself from the client;
    for the 2nd request from the client, the load balancer will establish a connection between the client and server B and disconnect itself from the client;
    for the 3rd request from the client, the load balancer will establish a connection between the client and server C and disconnect itself from the client.

Any help on implementing this in Java is greatly appreciated.
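For reference, here is a minimal sketch in Java of the round-robin selection described above, under one assumption: a user-space Java process cannot hand an already-established TCP connection over to another host (see the last answer below), so the balancer simply tells the client which backend to connect to next and then closes the socket. The addresses and ports are the ones from the example; the "REDIRECT ip:port" reply format is made up for illustration.

    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Sketch only: listens on port 1235 and hands each client the address of the
    // next backend (A, B, C in turn), then closes. The client is expected to
    // reconnect directly to the address it received.
    public class RoundRobinBalancer {
        private static final List<String> BACKENDS = Arrays.asList(
                "10.1.2.4:1236",   // server A
                "10.1.2.4:1237",   // server B
                "10.1.2.5:1238");  // server C
        private static final AtomicInteger NEXT = new AtomicInteger();

        public static void main(String[] args) throws Exception {
            try (ServerSocket listener = new ServerSocket(1235)) {
                while (true) {
                    try (Socket client = listener.accept();
                         PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                        // Strict rotation: A, B, C, A, ...
                        String backend = BACKENDS.get(
                                Math.floorMod(NEXT.getAndIncrement(), BACKENDS.size()));
                        out.println("REDIRECT " + backend);
                    } // closing the socket is the "disconnect itself from the client" step
                }
            }
        }
    }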

This miniature design can be helpful for apps on small devices like mobiles and tablets. The server can set a limit on how many round trips a request from a particular device is allowed. Of course, scaling the online servers will help reduce the round-trip count.

I would use Redis to store the lookup table. Each load-balancer server will look up in the table in Redis a connection to the most available / highest-priority server. This lookup returns a single integer, which is the index of that server. Each app on the client stores the server IPs with their respective indexes, so the lookup is very fast, under 30 ms. At this point the connection will be faster; NO redirect is needed. A fallback is also provided for the case where there are concurrent connections and the quota on the desired server is exhausted by the time the app tries to connect to the looked-up server. In that case it looks up the most available server again, i.e. starts over recursively until it either connects successfully or all resources are exhausted and the connection request is marked as a dead end.
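A minimal sketch of that lookup, assuming the Jedis client and a Redis sorted set named "servers" whose score is the current connection count; the member with the lowest score is treated as the most available server. (The description above returns an index; here the member itself is the "ip:port" string, purely for brevity.)

    import redis.clients.jedis.Jedis;

    // Sketch only: backends live in a sorted set "servers", scored by load.
    public class ServerLookup {
        public static String mostAvailableServer(Jedis jedis) {
            // ZRANGE servers 0 0 -> the member with the smallest score
            for (String server : jedis.zrange("servers", 0, 0)) {
                jedis.zincrby("servers", 1, server); // optimistically count this connection
                return server;
            }
            throw new IllegalStateException("no servers registered");
        }

        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // Seed the lookup table once, e.g. at start-up.
                jedis.zadd("servers", 0, "10.1.2.4:1236");
                jedis.zadd("servers", 0, "10.1.2.4:1237");
                jedis.zadd("servers", 0, "10.1.2.5:1238");
                System.out.println("connect to " + mostAvailableServer(jedis));
            }
        }
    }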

How about reserving the connection for a few milliseconds for each lookup? Once the connect delay for that lookup expires, the reserved slot becomes available again for a new connection. This reduces the recursive lookups but also blocks connectivity: the delay must be long enough for a connection to be established, which can vary, and new connections are blocked during that delay, which can make for a bad user experience. You need to trade off between the two: fewer lookups but blocked connectivity, versus never blocking connectivity and enduring the recursive lookups, which are very fast.
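One way to sketch that reservation, again assuming Redis: SET with NX and PX gives an atomic, self-expiring hold on a slot, so an abandoned lookup frees the slot on its own once the delay passes. The key layout and the 200 ms budget below are illustrative assumptions, not recommendations.

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.params.SetParams;

    // Sketch only: reserve one connection slot on a backend for a short window.
    public class SlotReservation {
        private static final long RESERVATION_MS = 200; // assumed time budget to connect

        // Returns true if this client won slot <slotId> on <server> for the window.
        public static boolean reserve(Jedis jedis, String server, int slotId, String clientId) {
            String key = "reserve:" + server + ":" + slotId;   // hypothetical key layout
            String result = jedis.set(key, clientId,
                    SetParams.setParams().nx().px(RESERVATION_MS));
            return "OK".equals(result); // null means another client holds the slot
        }
    }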

The difficulty is disconnecting the device from the server side and connecting it to the intended server.
There is a workaround: use a redirect instead.

  • Each server is both a load balancer and a service provider to the client devices.
  • Each server keeps track of its open-file count and preserves a safe margin below the maximum open-file limit.
  • This pool is used to check whether a redirect is needed.
  • When the open-file count reaches the safe limit, any further connection request returns the next available server IP to the client device.
  • The next nearest available server IP can be maintained in an in-memory lookup table.
  • The device checks whether the returned value starts with the redirect IP; if it does, it automatically reconnects to the received IP. Otherwise it assumes it got the service from that server successfully.

This way we avoid hitting the open-file limit and the resulting connection-refused errors.
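A rough Java sketch of this redirect scheme; the SAFE_LIMIT value, the next-server constant and the "REDIRECT ip:port" line format are assumptions made for illustration only.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.atomic.AtomicInteger;

    // Sketch only: a server that serves requests itself until it nears its
    // open-file safe margin, then answers with a redirect to the next server.
    public class RedirectingServer {
        private static final int SAFE_LIMIT = 900;                 // safe margin below the open-file limit
        private static final String NEXT_SERVER = "10.1.2.5:1238"; // from the in-memory lookup table
        private static final AtomicInteger OPEN = new AtomicInteger();

        public static void main(String[] args) throws Exception {
            try (ServerSocket listener = new ServerSocket(1236)) {
                while (true) {
                    Socket client = listener.accept();
                    if (OPEN.incrementAndGet() > SAFE_LIMIT) {
                        redirect(client);   // over the safe margin: hand out the next server and close
                    } else {
                        serve(client);      // normal service on this server
                    }
                }
            }
        }

        private static void redirect(Socket client) {
            try (Socket c = client; PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                out.println("REDIRECT " + NEXT_SERVER);
            } catch (Exception ignored) {
            } finally {
                OPEN.decrementAndGet();
            }
        }

        private static void serve(Socket client) {
            try (Socket c = client; PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                out.println("OK served by this server"); // real application logic would go here
            } catch (Exception ignored) {
            } finally {
                OPEN.decrementAndGet();
            }
        }

        // Client side: if the first line starts with "REDIRECT ", reconnect to that
        // address; otherwise treat the line as the normal response from this server.
        static Socket connectFollowingRedirect(String host, int port) throws Exception {
            Socket s = new Socket(host, port);
            BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
            String first = in.readLine();
            if (first != null && first.startsWith("REDIRECT ")) {
                s.close();
                String[] target = first.substring("REDIRECT ".length()).split(":");
                return new Socket(target[0], Integer.parseInt(target[1]));
            }
            return s;
        }
    }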

I do not quite understand the requirement that the load balancer should disconnect from the client. If your sockets are in fact TCP connections, as they appear to be, I do not see how you could offload the connection to a client running somewhere else without some low-level hackery. For example, look at ldirectord from Linux Virtual Server. That allows fully offloading the connection.

For pure simplicity, I'd just use HAProxy. It does most of what you want, except for offloading the connection.

Finally, you could also use some kind of round-robin DNS solution. That would also offload the connection as you require.
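With round-robin DNS the client resolves a single hostname to several A records and picks one itself, so nothing sits between client and server after the lookup. A tiny sketch (the hostname is an assumption):

    import java.net.InetAddress;
    import java.util.concurrent.ThreadLocalRandom;

    // Sketch only: let DNS return all backend addresses and pick one at random.
    public class DnsPick {
        public static void main(String[] args) throws Exception {
            InetAddress[] candidates = InetAddress.getAllByName("sockets.example.com");
            InetAddress chosen = candidates[ThreadLocalRandom.current().nextInt(candidates.length)];
            System.out.println("connect to " + chosen.getHostAddress() + ":1236");
        }
    }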
