
Measuring request processing time at the JBoss server using mod_jk

My web application architecture is: Apache 2 (as load balancer) + JBoss 3.2.3 + MySQL 5.0.19.

I want to measure the request processing time (for every individual request) spent on the JBoss server only (i.e., excluding time spent on the web and database servers).

I've been researching how to log request processing time on the application tier only. I found *mod_JK logging*, Apache's *mod_log_config*, and Tomcat's AccessLogValve as three possible methods.

Using *mod_JK logging*: my understanding is that mod_jk logging provides the request processing time for each request, calculated as the difference between the time a request leaves the Apache server and the time the corresponding response is received back by the Apache server. Please correct me if this is not accurate.
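For reference, a minimal mod_jk logging setup might look like the sketch below (JkRequestLogFormat and its format codes are documented in the mod_jk configuration reference, where %T is described as the request duration in seconds; the log path and format string here are illustrative):

    # Hypothetical httpd.conf excerpt: log worker name, URL, and request
    # duration (%T, in seconds) for every request forwarded to JBoss/Tomcat
    JkLogFile   /var/log/apache2/mod_jk.log
    JkLogLevel  info
    JkRequestLogFormat "%w %U %T"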

Using Apache's *mod_log_config* module ( http://www.lifeenv.gov.sk/tomcat-docs/jk/config/apache.html ): by adding "%{JK_REQUEST_DURATION}n" to the LogFormat directive (see the above link). The JK_REQUEST_DURATION note captures the overall Tomcat processing time from Apache's perspective.
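As a sketch (assuming a mod_jk version that sets the JK_REQUEST_DURATION note; the format name, log path, and the rest of the format string are illustrative):

    # Hypothetical httpd.conf excerpt: append the mod_jk duration note
    # to a combined-style access log
    LogFormat "%h %l %u %t \"%r\" %>s %b %{JK_REQUEST_DURATION}n" jk_combined
    CustomLog logs/jk_access.log jk_combined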

In both of the above cases, the measured time includes Tomcat/JBoss + MySQL processing time. That won't help in my case, since it includes the MySQL processing time and I want to record the request processing time on JBoss only. Any suggestions/ideas are appreciated.

Using AccessLogValve: it can log the "time taken to process the request, in millis" by adding %D to the pattern attribute of the AccessLogValve XML element (see the sketch after this list). It is not clear whether this is:

  • The time Tomcat/JBoss needs just to accept the request (e.g., allocate a worker thread to handle it)
  • The time taken to process the request, including sending queries to the database server (overall time on the Tomcat/JBoss server)
  • The time taken by Tomcat/JBoss to process the request and send the response back to the web server/client
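For reference, a minimal valve configuration looks like the following (in JBoss 3.2.3 this element would go into the embedded Tomcat's server.xml; the attribute values are illustrative). Per the Tomcat documentation, %D is the time taken to process the request in milliseconds, measured over the whole request/response cycle inside Tomcat/JBoss, so it still includes time spent waiting on the database:

    <!-- Hypothetical server.xml excerpt: access log with %D (millis per request) -->
    <Valve className="org.apache.catalina.valves.AccessLogValve"
           directory="logs" prefix="access_log." suffix=".txt"
           pattern="%h %l %u %t &quot;%r&quot; %s %b %D"
           resolveHosts="false"/>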

Any idea/clue?

This is the experience/research I wanted to share. If anyone has faced a similar problem or knows a way to do this, I would appreciate your experience/pointers/thoughts on where a better solution can be found.

Looking forward to your thoughts/suggestions.

Why do you want to exclude the database time? Time spent on the database is time your application is waiting, exactly as it could be waiting for other resources, e.g., Lucene indexing to finish or a remote HTTP request to complete.

If you really want to exclude the DB access time, you need to instrument your application with timer start/stop instructions. This will definitely need to go inside your application (either "cleanly" via AOP or manually via start/stop statements at critical points in the app) and cannot simply be a configuration from the outside world (e.g., an Apache module).

So, you'll need to start the timer when you receive the request at the very start of the processing chain (a servlet filter works well here), stop it every time you send a query, and start it again right after the query returns (a sketch follows below). This of course cannot be 100% complete, especially if you use a transparent ORM such as Hibernate, because sometimes you could be executing queries indirectly, i.e., when traversing a collection or an association via plain Java.
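A minimal sketch of that idea, assuming a Servlet 2.3-style filter (the class and method names are illustrative, not an existing API); pause()/resume() would be called around each JDBC query, either manually or from an AOP interceptor:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    // Starts the stopwatch when a request enters the application tier and
    // reports the elapsed time minus any paused (database) intervals.
    public class AppTimeFilter implements Filter {

        public void init(FilterConfig config) throws ServletException { }

        public void destroy() { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            RequestTimer.begin();                    // request enters JBoss
            try {
                chain.doFilter(req, res);
            } finally {
                long appMillis = RequestTimer.end(); // total time minus DB time
                System.out.println("JBoss-only processing time: " + appMillis + " ms");
            }
        }
    }

    // Thread-local stopwatch: call pause() just before sending a query and
    // resume() immediately after the result comes back.
    class RequestTimer {
        private static final ThreadLocal TIMER = new ThreadLocal();

        public static void begin() {
            // slot 0: request start, slot 1: accumulated DB time, slot 2: pause start
            TIMER.set(new long[] { System.currentTimeMillis(), 0L, 0L });
        }

        public static void pause() {
            ((long[]) TIMER.get())[2] = System.currentTimeMillis();
        }

        public static void resume() {
            long[] t = (long[]) TIMER.get();
            t[1] += System.currentTimeMillis() - t[2];
        }

        public static long end() {
            long[] t = (long[]) TIMER.get();
            TIMER.set(null);
            return (System.currentTimeMillis() - t[0]) - t[1];
        }
    }

The filter then reports the total request time minus the accumulated query time, which is the JBoss-only figure the question asks for.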

I think you are looking for a profiling tool, such as Java VisualVM, JProfiler, or others.
