
Apache with JBOSS using AJP (mod_jk) giving spikes in thread count

We used Apache with JBOSS to host our application, but we found some issues related to mod_jk's thread handling.

Our website is a low-traffic site, with a maximum of 200-300 concurrent users during peak activity. As traffic grew (not in terms of concurrent users, but in terms of cumulative requests reaching our server), the server stopped serving requests for long stretches; it didn't crash, but it could not serve requests for up to 20 minutes. The JBOSS server console showed 350 busy threads on both servers, although there was plenty of free memory, say more than 1-1.5 GB (we used 2 64-bit servers for JBOSS, with 4 GB RAM allocated to JBOSS).

To investigate the problem we used the JBOSS and Apache web consoles, and we saw threads sitting in the S state for minutes at a time, although our pages take only around 4-5 seconds to serve.

We took a thread dump and found that the threads were mostly in the WAITING state, which means they were waiting indefinitely. These threads did not belong to our application classes but to the AJP port 8009.
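For reference, on JDK 1.6 a thread dump can be captured in either of these standard ways, where <pid> is the JBoss java process:

    jstack <pid> > threaddump.txt   # JDK 1.6 tool; writes the dump to a file
    kill -3 <pid>                   # SIGQUIT; dump goes to the console/log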

Could somebody help me with this, as somebody else might have hit this issue and solved it somehow? If any more information is required, let me know.

Also, is mod_proxy better than mod_jk, or does mod_proxy have other problems that could be fatal for me if I switch to it?
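For comparison, routing to the same AJP backend through mod_proxy_ajp would look roughly like the following; note that mod_proxy_ajp requires Apache 2.2+, whereas mod_jk works with 2.0.x, and the context path and backend address here are only illustrative:

    # Apache config for mod_proxy_ajp (requires Apache 2.2+; illustrative)
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
    ProxyPass /myapp ajp://localhost:8009/myapp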

The versions I used are as follows:

Apache 2.0.52
JBOSS: 4.2.2
MOD_JK: 1.2.20
JDK: 1.6
Operating System: RHEL 4

Thanks for the help.

Guys!!!! We finally found the workaround with the configuration mentioned above. It is the use of APR and is mentioned here: http://community.jboss.org/thread/153737 . It is the connector issue, as correctly pointed out by many people in the answers below. Earlier we made a temporary workaround by configuring Hibernate and improving the response time. The full fix is APR.

We are experiencing similar issues. We are still working on solutions, but it looks like a lot of answers can be found here:

http://www.jboss.org/community/wiki/OptimalModjk12Configuration
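As a rough sketch of the kind of settings that wiki recommends (the worker name and values below are illustrative, not our actual configuration):

    # workers.properties: give Apache->JBoss connections a timeout (in seconds)
    # so idle backend connections are closed instead of held forever
    worker.list=node1
    worker.node1.type=ajp13
    worker.node1.host=localhost
    worker.node1.port=8009
    worker.node1.connection_pool_timeout=600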

Good luck!

Deploy the Apache native APR under jboss/bin/native.

Edit your JBoss run.sh to make sure it looks for the native libs in the right folder.

This will force JBoss to use native AJP connector threads rather than the default pure-Java ones.
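For illustration, the run.sh change might look like the following, assuming a standard JBoss 4.2.2 layout (the paths are assumptions):

    # run.sh: point the JVM at the native APR libraries (paths illustrative)
    JBOSS_NATIVE_DIR="$JBOSS_HOME/bin/native"
    LD_LIBRARY_PATH="$JBOSS_NATIVE_DIR:$LD_LIBRARY_PATH"
    export LD_LIBRARY_PATH
    JAVA_OPTS="$JAVA_OPTS -Djava.library.path=$JBOSS_NATIVE_DIR"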

You should also take a look at the JBoss Jira issue titled "AJP Connector Threads Hung in CLOSE_WAIT Status":

https://jira.jboss.org/jira/browse/JBPAPP-366

We were having this issue in a JBoss 5 environment. The cause was a web service that took longer to respond than JBoss/Tomcat allowed. This would cause the AJP thread pool to eventually exhaust its available threads, and it would then stop responding. Our solution was to adjust the web service to use a Request/Acknowledge pattern rather than a Request/Respond pattern, which allowed the web service to respond within the timeout period every time. Granted, this doesn't solve the underlying JBoss configuration issue, but it was easier for us in our context than tuning JBoss.
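A minimal sketch of that Request/Acknowledge idea in a servlet, assuming the slow call can be handed off to a background pool (the class and method names here are illustrative, not from the original answer):

    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class AcknowledgeServlet extends HttpServlet {
        // Background pool that does the slow work off the AJP request thread
        private final ExecutorService workers = Executors.newFixedThreadPool(10);

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            final String payload = req.getParameter("payload"); // read input up front
            workers.submit(new Runnable() {
                public void run() {
                    process(payload); // the long-running call happens here
                }
            });
            // Acknowledge immediately, releasing the AJP thread within the timeout
            resp.setStatus(HttpServletResponse.SC_ACCEPTED);
        }

        private void process(String payload) {
            // ... call the slow web service / do the real work ...
        }
    }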

What we did to sort this issue out is as follows:

 <property name="hibernate.cache.use_second_level_cache">false</property>


 <property name="hibernate.search.default.directory_provider">org.hibernate.search.store.FSDirectoryProvider</property>
    <property name="hibernate.search.Rules.directory_provider">
        org.hibernate.search.store.RAMDirectoryProvider 
    </property>

    <property name="hibernate.search.default.indexBase">/usr/local/lucene/indexes</property>

    <property name="hibernate.search.default.indexwriter.batch.max_merge_docs">1000</property>
    <property name="hibernate.search.default.indexwriter.transaction.max_merge_docs">10</property>

    <property name="hibernate.search.default.indexwriter.batch.merge_factor">20</property>
    <property name="hibernate.search.default.indexwriter.transaction.merge_factor">10</property>

 <property name ="hibernate.search.reader.strategy">not-shared</property>   
 <property name ="hibernate.search.worker.execution">async</property>   
 <property name ="hibernate.search.worker.thread_pool.size">100</property>  
 <property name ="hibernate.search.worker.buffer_queue.max">300</property>  

 <property name ="hibernate.search.default.optimizer.operation_limit.max">1000</property>   
 <property name ="hibernate.search.default.optimizer.transaction_limit.max">100</property>  

 <property name ="hibernate.search.indexing_strategy">manual</property> 

The above parameters ensured that the worker threads were not blocked by Lucene and Hibernate Search. Hibernate's default optimizer made our life easy, so I consider this setting very important.

We also removed the C3P0 connection pooling and used the built-in JDBC connection pooling, so we commented out the section below.

    <!-- For JDBC connection pool (use the built-in) -->
    <!--
    <property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
    -->
    <!-- DEPRECATED very expensive property name="c3p0.validate" -->
    <!-- seconds -->
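For illustration, with C3P0 removed, the built-in pool can be sized with a single property (the value here is an assumption, not from the original answer):

    <!-- built-in Hibernate pool size; 20 is an illustrative value -->
    <property name="connection.pool_size">20</property>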

After doing all this, we were able to considerably reduce the time an AJP thread took to serve a request, and threads started returning to the R state after serving a request rather than remaining stuck in the S state.

There is a bug in Tomcat 6 that was filed recently. It concerns the HTTP connector, but the symptoms sound the same.

https://issues.apache.org/bugzilla/show_bug.cgi?id=48843#c1

There is a bug related to the AJP connector executor leaking threads, and the solution is explained here: Jboss AJP thread pool not released idle threads. In summary, AJP thread-pool connections have no timeout by default and will persist permanently once established. Hope this helps.
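As a hedged sketch of that fix, the AJP connector in JBoss's bundled server.xml (under deploy/jboss-web.deployer/ on 4.2.x) can be given an explicit connectionTimeout in milliseconds so idle connections are reclaimed; the values below are illustrative:

    <!-- server.xml: reclaim idle AJP connections (values are illustrative) -->
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
               maxThreads="350" connectionTimeout="600000"/>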
