commons-dev mailing list archives

From "Phil Steitz" <>
Subject Re: [performance] feedback/suggestions
Date Mon, 06 Aug 2007 00:15:43 GMT
Hi Michael.  Many thanks for the feedback and thanks in advance for
any patches that you would like to contribute.  See responses interspersed.

On 8/5/07, Michael Heuer <> wrote:
> Hello Phil,
> I saw a few more commits this weekend on [performance] and thought you
> might welcome a bit of feedback:
> Use System.nanoTime() instead of System.currentTimeMillis()?
Assuming this is no more intrusive/expensive, could be a good idea.
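For reference, the elapsed-time idiom with nanoTime() looks roughly like this (the helper class and method names are illustrative, not [performance] code); nanoTime() is monotonic, so it is not affected by wall-clock adjustments the way currentTimeMillis() is:

```java
public class NanoTiming {
    // Time a task with System.nanoTime(), which is monotonic and
    // meaningful only for measuring elapsed intervals, not wall-clock time.
    public static long timeMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1_000_000L; // ns -> ms
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
        });
        System.out.println("elapsed ~" + elapsed + " ms");
    }
}
```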

> In LoadGenerator execute() ex.awaitTermination(...) may throw an
> InterruptedException, you may wish to catch that before calculating the
> summary statistics.
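Catching that might look roughly like the following sketch (class and method names are illustrative, not the actual LoadGenerator code) - interruption restores the interrupt flag and returns false so the caller can skip the statistics on an aborted run:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownExample {
    // Wait for termination, translating interruption into a false return
    // so the caller can skip the summary statistics on an aborted run.
    static boolean awaitQuietly(ExecutorService ex, long seconds) {
        try {
            return ex.awaitTermination(seconds, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
            ex.shutdownNow();
            return false;
        }
    }

    public static void main(String[] args) {
        ExecutorService ex = Executors.newFixedThreadPool(2);
        ex.submit(() -> { /* client task would run here */ });
        ex.shutdown();
        if (awaitQuietly(ex, 60)) {
            // safe to calculate summary statistics here
            System.out.println("all clients finished");
        }
    }
}
```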

> If your intent is to start all of the client threads at the same time, I
> believe you want to use a pair of CountDownLatches instead of the thread
> pool executor.  See e.g. Listing 5.11 in _Java Concurrency in Practice_
> (Goetz et al., 2006).  Executor.execute() only promises to "execute the
> given command at some time in the future."
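The start-gate/end-gate pattern Michael refers to looks roughly like this sketch (modeled on Goetz et al., Listing 5.11; names here are illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class TimedRun {
    // Start-gate/end-gate pattern: all worker threads block on startGate,
    // the timing thread opens it at once, and endGate counts workers down
    // as they finish.
    public static long timeTasks(int nThreads, Runnable task) throws InterruptedException {
        final CountDownLatch startGate = new CountDownLatch(1);
        final CountDownLatch endGate = new CountDownLatch(nThreads);
        for (int i = 0; i < nThreads; i++) {
            new Thread(() -> {
                try {
                    startGate.await();      // wait until every thread is ready
                    try {
                        task.run();
                    } finally {
                        endGate.countDown();
                    }
                } catch (InterruptedException ignored) {
                }
            }).start();
        }
        long start = System.nanoTime();
        startGate.countDown();              // release all threads together
        endGate.await();                    // wait for all threads to finish
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        long elapsed = timeTasks(8, () -> { /* client request would go here */ });
        System.out.println("elapsed ns: " + elapsed);
    }
}
```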

I think it might actually be better to start the threads over a
startup interval, which should be configurable.  Patches welcome!
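One possible shape for a configurable startup interval, assuming a ScheduledExecutorService-based ramp-up (class and method names are hypothetical, not existing [performance] API):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RampUp {
    // Spread client startup evenly over a configurable ramp-up interval
    // instead of releasing all clients at once.
    public static ScheduledExecutorService start(int nClients, long rampUpMillis,
                                                 Runnable client) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(nClients);
        long step = rampUpMillis / Math.max(1, nClients);
        for (int i = 0; i < nClients; i++) {
            scheduler.schedule(client, i * step, TimeUnit.MILLISECONDS);
        }
        scheduler.shutdown(); // already-scheduled tasks still run, then it exits
        return scheduler;     // caller can awaitTermination on this
    }

    public static void main(String[] args) {
        start(5, 1000, () -> System.out.println("client started at " + System.nanoTime()));
    }
}
```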

> Perhaps more useful than having the client threads calculate time to wait
> between executions and perform timings would be an implementation of
> ExecutorService that does this instead.  It might also start the client
> threads at the same time.  Maybe a custom class that extends
> RunnableFuture could provide the setUp() and cleanUp() methods and return
> the execution time as the future (see CustomTask in javadoc for
> AbstractExecutorService)?

This is an interesting idea.  The thing that I like about the current
setup is that the threads can keep track of "misses" - times when they
miss a scheduled start because the configuration specifies a bounded
pool of threads firing requests according to the specified load pattern.
Also, absent misses, the resource latency does not confound the load
(i.e., the threads start when they are supposed to, net of response
time).  The overhead associated with timing computation, starting,
init, etc., is also included in the per-thread delay time with the
current setup.  An alternative would be to have a manager start up and
run threads at the desired frequency, but care would have to be taken
to ensure that the manager's accounting did not skew the actual timings.
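A rough sketch of the custom-task idea, with a FutureTask subclass that wraps execution in timing (the setUp()/cleanUp() hooks and class name are illustrative only - compare the CustomTask example in the AbstractExecutorService javadoc):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class TimedTask<V> extends FutureTask<V> {
    private volatile long elapsedNanos = -1;

    public TimedTask(Callable<V> callable) {
        super(callable);
    }

    // Hypothetical hooks standing in for the proposed setUp()/cleanUp().
    protected void setUp() { }
    protected void cleanUp() { }

    @Override
    public void run() {
        setUp();
        long start = System.nanoTime();
        try {
            super.run();  // executes the wrapped callable
        } finally {
            elapsedNanos = System.nanoTime() - start;
            cleanUp();
        }
    }

    // Execution time becomes queryable alongside the future's result.
    public long getElapsedNanos() { return elapsedNanos; }

    public static void main(String[] args) throws Exception {
        TimedTask<String> t = new TimedTask<>(() -> "done");
        t.run();
        System.out.println(t.get() + " in " + t.getElapsedNanos() + " ns");
    }
}
```

A custom ExecutorService could override newTaskFor to hand these out, so callers get the timing from the returned future.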

> What method(s) do you use to monitor the pool and/or the database when
> running these performance tests?

Very crude right now.  On the db side, I monitor logs or even use
primitive things like ps (postgres), or db monitoring tools (Sybase).

> You mention "instrumented dbcp and pool jars" and database server log
> files in the comments for DBCP-212.

I am working on getting instrumentation into trunk for both dbcp and
pool.  Have to be careful about performance impacts and decide on
naming, etc.  What I use locally are jars built from release versions
with jdk logging added.
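For illustration, the jdk-logging approach might look roughly like this in pool code (class and pool logic here are stand-ins, not the actual instrumentation); guarding the log call keeps the overhead near zero when logging is disabled:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class InstrumentedPool {
    private static final Logger LOG = Logger.getLogger(InstrumentedPool.class.getName());

    // Guard the log call so instrumentation costs almost nothing when
    // FINE logging is disabled.
    public Object borrowObject() {
        long start = System.nanoTime();
        Object conn = new Object(); // stand-in for the real pool's borrow logic
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("borrowObject took " + (System.nanoTime() - start) + " ns");
        }
        return conn;
    }

    public static void main(String[] args) {
        System.out.println(new InstrumentedPool().borrowObject() != null);
    }
}
```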

> I have generated RRDTool-style connection graphs based on queries against
> Oracle's V$SESSION view; if similar data are available for other databases
> such graphs might make a useful companion tool.  Both JRobin [1] and
> RRD4J [2] are LGPL, however.

Sharing results would be great.

Thanks again for the feedback, and please do not hesitate to submit
patches.  I just published a web site for [performance] and will get a JIRA
category set up for issues.

