cocoon-dev mailing list archives

From Ovidiu Predescu <>
Subject testing/performance framework for Cocoon
Date Fri, 21 Sep 2001 15:33:13 GMT


One of the most frustrating parts of developing Cocoon is the lack of
a testsuite. Since I started working with Cocoon, I've seen a feature
work fine one day and be completely broken in the next day's CVS
snapshot. As a result I'm trying to come up with a framework for
automated regression testing, and maybe performance testing as well.

I've looked at the Cactus framework for server side unit testing,
and it seems like a good start. I also have some ideas on how one
can extend Ant to be able to write tests directly in its XML build
file.

The Cactus and Ant based approaches are two different methodologies
for writing tests. Cactus provides the ability to have the tests
written in Java, using an extension to JUnit.

The Ant based idea I have allows one to write tests directly using Ant
tasks. It looks to me like an easier approach for people who don't
know how to write Java, and it also suits people who want a more
scripting-like approach to writing tests.

The two methodologies do not exclude each other. A person could
implement tests using either Cactus or Ant, depending on how complex
the test is, or on personal preference.

I will not describe how to use Cactus here. Please refer to its
Web site for more information.

The Ant based approach requires some extension tasks to be
defined. Most of them seem easy to implement; only one is more
difficult.

I've attached an example build.xml for Ant that could be used to
describe some tests and performance tests, so that you get an idea of
what I'm talking about. My current idea is to have the times taken to
perform the tests written into a database, with a timestamp, so that
we can see how performance improves or degrades over long periods of
time.

Below is a list of Ant extension tasks:

<database> - describe the database used to store the results.

<test> - defines a new test. Each test has a name and a flag saying
whether the time taken to execute it should be logged in the
database. The assumption is that some tests are plain functional
tests, while others are really performance measurements.

<url-get> - makes an HTTP GET request to a URL

<url-post> - makes an HTTP POST request to a URL

<check> - checks an XPath expression in the XML document returned by a
<url-get> or <url-post>.

<iterate> - creates a "for" loop so we can do very simple loops

<spawn> - creates a given number of threads and executes the tasks
specified as children. This could be used to simulate multiple clients
connecting to a URL.
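
To make the proposal concrete, here is a hypothetical build.xml
fragment using the tasks above. All attribute names and values are
illustrative assumptions on my part, not an implemented API:

```xml
<project name="cocoon-tests" default="tests">
  <!-- Where timing results are recorded (attributes are illustrative) -->
  <database driver="org.hsqldb.jdbcDriver"
            url="jdbc:hsqldb:testresults"/>

  <target name="tests">
    <!-- A plain functional test: fetch a page and check its content -->
    <test name="hello-page" log-time="false">
      <url-get url="http://localhost:8080/cocoon/hello"/>
      <check xpath="/html/head/title" value="Hello"/>
    </test>

    <!-- A performance test: 10 clients, 100 requests each, time logged -->
    <test name="hello-load" log-time="true">
      <spawn threads="10">
        <iterate times="100">
          <url-get url="http://localhost:8080/cocoon/hello"/>
        </iterate>
      </spawn>
    </test>
  </target>
</project>
```

The first <test> is a pure correctness check; the second would have
its elapsed time written to the configured database.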

With the exception of <check>, all the tasks seem to be easy to
implement.
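
Since <check> is the harder one, here is a minimal sketch of its core
in Java, using the standard javax.xml.xpath API. The class and method
names are hypothetical, and a real task would get the XML from the
preceding <url-get>/<url-post> response rather than a string:

```java
import java.io.StringReader;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;
import org.xml.sax.InputSource;

// Hypothetical core of a <check> task: parse the response body as XML
// and evaluate an XPath expression against it.
public class XPathCheck {

    // Returns the string value of the XPath expression evaluated on the XML.
    public static String evaluate(String xml, String expression) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        return xpath.evaluate(expression, doc);
    }

    public static void main(String[] args) throws Exception {
        String body = "<page><title>Hello</title></page>";
        // The check passes when the evaluated value matches the expected one.
        System.out.println(evaluate(body, "/page/title"));
    }
}
```

The task itself would then just compare the evaluated value against an
expected attribute and fail the build on a mismatch.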

My current plan is to use Cactus (in the Ant mode) as the framework to
drive the tests, and the above Ant methodology as part of their
build.xml driver. Cactus provides some nice features which are worth
using.
I would appreciate any thoughts you might have on this.

Ovidiu Predescu <>
(inside HP's firewall only)
(my SourceForge page)
(GNU, Emacs, other stuff)
