tomcat-dev mailing list archives

From "Craig R. McClanahan" <craig...@apache.org>
Subject Re: tomcat 4 request/response instrumentation
Date Fri, 17 Aug 2001 16:46:32 GMT


On Fri, 17 Aug 2001, Kevin Seguin wrote:

> something that i've started thinking about recently is how to provide hooks
> in tomcat 4 so that some statistics regarding request processing time could
> be collected.  
> 
> off of the top of my head, a couple of interesting stats might be average
> request processing time for all contexts (or webapps) and average request
> processing time for each context.
> 
> so, some questions:
> 
> *) has anybody else considered this?
> *) does anybody else care about this?
> 
> 
> also, would it be possible to use a Valve to accomplish this?

Yep ... a Valve would do the trick just fine, with the caveat that timing
starts *after* your Connector has accepted the request and stops *before*
your Connector returns the response to the user.  In essence, you are
measuring the internal processing time, not the communications time.

Here's the basic outline of a performance monitoring valve:

  public class MyFirstValve extends ValveBase {

    public void invoke(Request request, Response response,
                       ValveContext context)
      throws IOException, ServletException {

      // Clock starts once the Connector has handed us the request ...
      long startTime = System.currentTimeMillis();

      // ... control passes down the pipeline (later Valves, then the
      // servlet itself) and comes back here when they are all done ...
      context.invoke(request, response);

      // ... so this measures the complete internal processing time.
      long stopTime = System.currentTimeMillis();
      System.out.println("Took " + (stopTime - startTime) + " ms.");

    }

  }

Note that you are free to maintain state information (the start time, in
this case) as local variables on the stack.  Even if there are multiple
simultaneous requests flowing through the Valve, they each happen on a
separate request thread -- with a separate stack.  So, the same design
principle applies here as applies to servlets in general: do not use
instance variables for per-request state information.
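If, on the other hand, you want the aggregate statistics Kevin asked about
(average processing time overall and per context), that state *is* shared
across request threads and must be synchronized.  Here is a minimal sketch
your Valve could call into -- the RequestStats class and its method names
are made up for illustration, not part of the Tomcat API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical aggregator -- the class and method names are invented
// for illustration; nothing here is part of the Tomcat 4 API.
public class RequestStats {

    private long totalMillis = 0;
    private long totalCount = 0;
    // context path -> {accumulated millis, request count}
    private final Map<String, long[]> perContext = new HashMap<String, long[]>();

    // Called from the Valve after timing one request; synchronized
    // because many request threads share this one object.
    public synchronized void record(String contextPath, long millis) {
        totalMillis += millis;
        totalCount++;
        long[] entry = perContext.get(contextPath);
        if (entry == null) {
            entry = new long[2];
            perContext.put(contextPath, entry);
        }
        entry[0] += millis;
        entry[1]++;
    }

    // Average over all requests in all contexts.
    public synchronized double averageMillis() {
        return totalCount == 0 ? 0.0 : (double) totalMillis / totalCount;
    }

    // Average for a single context.
    public synchronized double averageMillis(String contextPath) {
        long[] entry = perContext.get(contextPath);
        return (entry == null) ? 0.0 : (double) entry[0] / entry[1];
    }
}
```

The per-request start and stop times still live on the stack; only the
accumulated totals are shared, so the synchronized sections stay short.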


>  how are
> valves processed?  are they stacked such that the first valve entered is the
> last one exited?  or are they chained such that one valve is processed after
> the next, and once the last valve is processed, the response is sent?  what
> i'm getting at is, could you put a valve in place such that it could always
> log (or send events) at the beginning of a request and right before the
> response is sent?
> 

I think the most common term for how valves are organized is "pipelined"
-- in fact, Pipeline is the internal Tomcat name for the implementation.  
Another way to look at them is as an implementation of the "Chain of
Responsibility" design pattern in the GoF book.

Essentially, the Valves you configure are organized into a structure that
looks like this in ASCII art:

  Request -->          -->         -->
               Valve 1     Valve 2     Servlet
  Response <--         <--         <--

so that Valve 1 sees the request first and passes it on (that's not
actually required, but it is what you'd do for a performance monitoring
Valve).  Then Valve 2 does the same.  Eventually, you fall off the end of
the list of configured valves and the container calls your servlet for
you.  As the call stack unwinds, each valve regains control so it can
postprocess as necessary, and then returns.  Ultimately, when Valve 1
returns (to the Connector), the response is actually sent back to the
client.

Valves are added to a pipeline in the order that you declare them with
<Valve> elements in server.xml.  In addition, Tomcat itself adds standard
Valves as needed for some of its processing.  For example, if your web app
uses container managed security, Tomcat automatically adds a valve to do
the extra checks required -- if you don't use it, that valve is omitted
and you don't pay any extra performance cost.

In Tomcat, you also have the choice of where to place the Valve in
server.xml, which controls which subset of requests your Valve will see:

* Nested inside an <Engine> - will see all requests

* Nested inside a <Host> - will see all requests for that virtual host

* Nested inside a <Context> - will see all requests for that web app

so you can be more selective about which requests you want to time.
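For example, a Valve scoped to a single web app is declared inside that
app's <Context> element in server.xml -- the className and paths here are
made up for illustration:

```
  <Context path="/examples" docBase="examples">
    <Valve className="com.mycompany.MyFirstValve"/>
  </Context>
```

Moving the same <Valve> element up inside the enclosing <Host> or <Engine>
widens its scope accordingly, with no change to the Valve code itself.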

As an added bonus, once you understand Valves, you've got about a 95%
understanding of the new Filter API in Servlet 2.3 (which will be
portable, while Valve is specific to Tomcat 4).  If you care about
performance only within a single web app, you might want to go this way
instead of using a Valve, but either would work.
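For comparison, here is the same timing idea written as a Servlet 2.3
Filter -- a sketch only, with a made-up class name, and you'd still need
to declare it with <filter> and <filter-mapping> elements in web.xml:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Hypothetical timing filter; the class name is illustrative.
public class TimingFilter implements Filter {

    public void init(FilterConfig config) throws ServletException {
        // Nothing to set up for this sketch.
    }

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain)
            throws IOException, ServletException {
        long startTime = System.currentTimeMillis();
        // Pass control down the filter chain -- the analogue of
        // context.invoke() in the Valve version.
        chain.doFilter(request, response);
        long stopTime = System.currentTimeMillis();
        System.out.println("Took " + (stopTime - startTime) + " ms.");
    }

    public void destroy() {
        // Nothing to clean up.
    }
}
```

Note how doFilter() plays exactly the role invoke() plays in a Valve:
work before the chained call is preprocessing, work after it is
postprocessing.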

> tia,
> -kevin.
> 

Craig


