
Load balancing a Tomcat application running as two WARs

My application (deployed in Tomcat) consists of two WARs, a client (say A) and a server (say B). Both are deployed in the same JVM, and they communicate via a web service. Now, in order to make the application scalable, I want it clustered and deployed on multiple nodes. The following is the load balancer configuration in the Apache server:

<Proxy balancer://mycluster stickysession=JSESSIONID>
BalancerMember ajp://127.0.0.1:8009 min=10 max=100 route=jvm1 loadfactor=1
BalancerMember ajp://127.0.0.1:8019 min=20 max=200 route=jvm2 loadfactor=1
</Proxy>

ProxyPass /A balancer://mycluster/A
ProxyPass /B balancer://mycluster/B
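(For `stickysession=JSESSIONID` together with `route=jvm1`/`route=jvm2` to work, each Tomcat's Engine must advertise the matching `jvmRoute`; Tomcat appends it to the session ID, e.g. `ABC123.jvm1`, which is how `mod_proxy_balancer` maps a cookie back to a `BalancerMember`. A sketch of the relevant line in node 1's `conf/server.xml`, other attributes left at their defaults:)

```xml
<!-- node 1's conf/server.xml: jvmRoute must match route=jvm1 above -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
  <!-- ... hosts, realms, etc. unchanged ... -->
</Engine>
```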

In my client application, the server URL is provided as below:

server.url=http://localhost/B/myservice/

My intention is that any request reaching web app A on a node should be processed by web app B on the same node. But the current configuration does not give the intended result: a request processed in web app A on jvm1 goes to web app B on jvm2, and vice versa. Please let me know what I'm missing here and how I can get rid of the problem.

The behaviour you observe seems reasonable: you send a request to your Apache load balancer, and it gets routed to one of the nodes. If I understand your scenario correctly, you want to force the request (initiated by your web app) to be routed to the correct node. I can think of two ways to achieve this:

  1. I suppose the initial request reaching web app A comes from a user owning a session. If you have configured sticky sessions in Tomcat, you might reuse the user's session cookie and send it along with your web service request. This way, the load balancer will decide that the request be routed to the same node as the original request that brought you the cookie. Yet it might not be feasible to access the cookie from where you call your web service.
  2. It is not quite the load balancer's job to process your internal requests, so why use it at all? You might add a regular HTTP connector to both of your Tomcat configurations and use those instead for web service requests. Thus you could circumvent load balancing, which in this case only adds unnecessary latency and overhead to your communication. The downside: you'd probably need to hard-code the IPs to call.
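Option 1 could be sketched as follows, assuming the session ID is accessible where the web service call is made (the helper name and session ID are illustrative). Tomcat appends the `jvmRoute` to the session ID (e.g. `ABC123.jvm1`), which is what lets `mod_proxy_balancer` pick the right node:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class StickyServiceCall {
    // Build a request carrying the caller's JSESSIONID, so that
    // mod_proxy_balancer (stickysession=JSESSIONID) routes this internal
    // call to the same node that served the original user request.
    static HttpRequest buildStickyRequest(URI serviceUrl, String sessionId) {
        return HttpRequest.newBuilder(serviceUrl)
                .header("Cookie", "JSESSIONID=" + sessionId)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        // Sticky session IDs carry the jvmRoute suffix, e.g. "ABC123.jvm1"
        HttpRequest req = buildStickyRequest(
                URI.create("http://localhost/B/myservice/"), "ABC123.jvm1");
        System.out.println(req.headers().firstValue("Cookie").orElse(""));
        // prints JSESSIONID=ABC123.jvm1
        // the request would then be sent with java.net.http.HttpClient
    }
}
```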
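For option 2, a sketch of the extra connector in each node's `conf/server.xml` (the port is illustrative); the client on that node would then call its local port directly, e.g. `server.url=http://localhost:8081/B/myservice/`:

```xml
<!-- alongside the existing AJP connector: a plain HTTP connector for
     node-local web service calls that bypass the Apache balancer -->
<Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000" />
```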

BTW: your configuration looks as if both nodes and the load balancer run on a single machine. Are you sure about that?
