xml-general mailing list archives

From "Sam Ruby" <ru...@nc.rr.com>
Subject Re: Test Infrastructure Project Proposal
Date Sun, 11 Feb 2001 03:18:10 GMT
David_Marston wrote:
>
> This looks like a good topic for a workshop at ApacheCon. One of the
> most intriguing aspects of this integration testing is governing how
> various modules get plugged in. For example, the Xalan team would like
> to have a complete pipeline of Apache software that works, then plug in
> a revised Xalan piece, holding all other pieces constant, and see if the
> pipeline still works. I'm sure that other projects would like to do the
> same thing.

If the tests are run infrequently, this seems like a reasonable strategy.

If the tests are run on a regular basis, and the issues identified are
addressed rapidly, then running the current pipeline at each point is
practical and more feasible, as it avoids the combinatorial explosion of
permutations that would otherwise be required.
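
(A rough back-of-envelope illustration of that explosion, with an assumed
subproject count N rather than a tally of real Apache projects: if each of
N pieces could independently be at its last release or its current head,
covering every combination takes 2**N builds, the "revise one piece, hold
the rest constant" plan takes N, and a single all-current pipeline takes 1.)

    # Illustrative arithmetic only -- N is an assumption, not a real count.
    N = 12
    print("every version combination:", 2 ** N)  # 4096 pipeline builds
    print("one revised piece at a time:", N)     # 12 pipeline builds
    print("single all-current pipeline:", 1)     # 1 pipeline build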

Example: since Xerces and Xalan are used in the build process for many
projects, they get some minimal testing every time I run my nightly build.
For those who are not aware of this, take a look at:

http://jakarta.apache.org/builds/gump/latest/

If you take a look at the build for xml-fop, you will see it failed.
Because this is run regularly, I know what day the failure was introduced.
By replacing components with prior versions until I got a successful build,
I was able to isolate the problem to a single subproject.  By looking at
that project's changes for that day, I was able to produce a patch that
solves the problem, despite not being familiar with that code base.
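
For the curious, the isolation step amounts to a loop along the lines of
the sketch below (the install_version() and build_pipeline() helpers are
hypothetical stand-ins for the real nightly build scripts, and the
component list is an example, not the actual module set):

    # Sketch of the isolation loop described above: roll one component at a
    # time back to its last-known-good version, keep the rest current, and
    # see which substitution makes the pipeline build again.  All helpers
    # and names here are hypothetical placeholders.
    import subprocess

    COMPONENTS = ["xml-xerces", "xml-xalan", "xml-fop"]

    def install_version(component, version):
        # placeholder: put the requested build of the component on the path
        subprocess.run(["./install.sh", component, version], check=True)

    def build_pipeline():
        # placeholder: run the full nightly build; True means it succeeded
        return subprocess.run(["./build-all.sh"]).returncode == 0

    def isolate_failure():
        for suspect in COMPONENTS:
            for component in COMPONENTS:
                version = "last-good" if component == suspect else "current"
                install_version(component, version)
            if build_pipeline():
                # rolling only this component back fixed the build, so the
                # failure was introduced by its changes
                return suspect
        return None

    print("failure introduced by:", isolate_failure() or "not isolated")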

This would be considerably more efficient if the people who *ARE* familiar
with the code bases in question were normally the ones doing the debugging.

How do we make that happen?

