
Apache with JBOSS using AJP (mod_jk) giving spikes in thread count

We use Apache with JBoss to host our application, but we have hit some issues related to mod_jk's thread handling.

Our website is a low-traffic site, with at most 200-300 concurrent users during peak hours. As traffic grows (not in concurrent users, but in the cumulative number of requests reaching the server), the server stops serving requests for long stretches; it does not crash, but it cannot serve requests for up to 20 minutes. The JBoss server console shows 350 busy threads on both servers, even though there is plenty of free memory, say more than 1-1.5 GB. (We run two 64-bit JBoss servers, each with 4 GB of RAM allocated to JBoss.)

To investigate the problem we watched the JBoss and Apache web consoles, and we saw threads sitting in the S state for minutes at a time, even though our pages normally take around 4-5 seconds to serve.

We took a thread dump and found that the threads were mostly in the WAITING state, i.e. waiting indefinitely. These threads did not belong to our application classes but to the AJP connector on port 8009.

Could somebody help with this? Someone else may have run into this issue and solved it. If any more information is required, let me know.

Also, is mod_proxy better than mod_jk, or are there other problems with mod_proxy that could bite me if I switch to it?

The versions we used are as follows:

Apache: 2.0.52
JBoss: 4.2.2
mod_jk: 1.2.20
JDK: 1.6
Operating System: RHEL 4

Thanks for the help.

Update: we finally found the fix for the configuration described above. It is the use of APR, as described here: http://community.jboss.org/thread/153737 . The problem is indeed a connector issue, as several of the answers below correctly point out. Earlier we had only a temporary workaround of tuning Hibernate so responses came back faster; the full fix is APR.

We are experiencing similar issues. We are still working on solutions, but it looks like a lot of answers can be found here:

http://www.jboss.org/community/wiki/OptimalModjk12Configuration

Good luck!
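For what it's worth, the core of that wiki's advice is to size the AJP connector's thread pool and give it a connection timeout on the JBoss side, and to set a matching connection_pool_timeout for the worker in Apache's workers.properties. A rough sketch of the JBoss side (deploy/jboss-web.deployer/server.xml on 4.2.x; the values here are only illustrative, not recommendations):

    <!-- AJP connector; maxThreads and connectionTimeout (ms) are example values -->
    <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3"
               emptySessionPath="true" enableLookups="false" redirectPort="8443"
               maxThreads="350" connectionTimeout="600000" />

connectionTimeout is in milliseconds, so it should correspond to the worker's connection_pool_timeout (which is in seconds) on the mod_jk side, e.g. 600000 ms alongside 600 s.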

Deploy the Apache Portable Runtime (APR) native libraries under jboss/bin/native.

Edit your JBoss run.sh to make sure it looks for the native libs in the right folder.

This will force JBoss to use native AJP connector threads rather than the default pure-Java ones.
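In case it helps, "editing run.sh" usually comes down to making sure java.library.path (or LD_LIBRARY_PATH) includes jboss/bin/native. Depending on the JBoss Web version, server.xml (under deploy/jboss-web.deployer/) may also need the APR lifecycle listener declared near the top of the <Server> element; a sketch, assuming it is not already there:

    <!-- Initializes the APR/native library so connectors can use the native implementation -->
    <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />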

You should also take a look at the JBoss Jira issue, titled "AJP Connector Threads Hung in CLOSE_WAIT Status":

https://jira.jboss.org/jira/browse/JBPAPP-366

We were having this issue in a JBoss 5 environment. The cause was a web service that took longer to respond than JBoss/Tomcat allowed, which would eventually exhaust the AJP thread pool's available threads, at which point the server stopped responding. Our solution was to change the web service from a request/response pattern to a request/acknowledge pattern, so it could reply within the timeout period every time. Granted, this doesn't solve the underlying JBoss configuration issue, but in our context it was easier than tuning JBoss.

What we did to sort this issue out is as follows:

 <property name="hibernate.cache.use_second_level_cache">false</property>


 <property name="hibernate.search.default.directory_provider">org.hibernate.search.store.FSDirectoryProvider</property>
    <property name="hibernate.search.Rules.directory_provider">
        org.hibernate.search.store.RAMDirectoryProvider 
    </property>

    <property name="hibernate.search.default.indexBase">/usr/local/lucene/indexes</property>

    <property name="hibernate.search.default.indexwriter.batch.max_merge_docs">1000</property>
    <property name="hibernate.search.default.indexwriter.transaction.max_merge_docs">10</property>

    <property name="hibernate.search.default.indexwriter.batch.merge_factor">20</property>
    <property name="hibernate.search.default.indexwriter.transaction.merge_factor">10</property>

 <property name ="hibernate.search.reader.strategy">not-shared</property>   
 <property name ="hibernate.search.worker.execution">async</property>   
 <property name ="hibernate.search.worker.thread_pool.size">100</property>  
 <property name ="hibernate.search.worker.buffer_queue.max">300</property>  

 <property name ="hibernate.search.default.optimizer.operation_limit.max">1000</property>   
 <property name ="hibernate.search.default.optimizer.transaction_limit.max">100</property>  

 <property name ="hibernate.search.indexing_strategy">manual</property> 

The parameters above ensure that the worker threads are not blocked by Lucene and Hibernate Search. Hibernate Search's default optimizer made our life easy, so I consider this setting very important.

We also removed C3P0 connection pooling and used the built-in JDBC connection pooling, so we commented out the section below.

    <!-- For the JDBC connection pool we now use the built-in provider, so the
         C3P0 provider below is commented out -->
    <!--
    <property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
    -->
After doing all this, we considerably reduced the time an AJP thread took to serve a request, and threads started returning to the R state after serving a request instead of staying stuck in the S state.

There is a bug in Tomcat 6 that was filed recently. It concerns the HTTP connector rather than AJP, but the symptoms sound the same.

https://issues.apache.org/bugzilla/show_bug.cgi?id=48843#c1

There is a bug related to the AJP connector executor leaking threads; the solution is explained here: Jboss AJP thread pool not released idle threads. In summary, AJP thread-pool connections have no timeout by default and persist permanently once established. Hope this helps.
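For a Tomcat 6 / JBoss Web style server.xml (as used by JBoss 4.2), the equivalent knobs are a shared Executor whose idle threads get reclaimed, and/or a connectionTimeout on the AJP connector so connections do not stay open forever. A hedged sketch with illustrative values only:

    <!-- Shared thread pool; idle threads are reclaimed after maxIdleTime (ms) -->
    <Executor name="ajpThreadPool" namePrefix="ajp-exec-"
              maxThreads="300" minSpareThreads="25" maxIdleTime="60000" />

    <!-- AJP connector using the executor; connectionTimeout (ms) closes idle
         AJP connections instead of keeping them forever -->
    <Connector port="8009" protocol="AJP/1.3" executor="ajpThreadPool"
               redirectPort="8443" connectionTimeout="600000" />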
