gump-general mailing list archives

From Stefan Bodewig <>
Subject Re: Test data wanted
Date Fri, 12 Sep 2008 13:25:59 GMT
On Wed, 10 Sep 2008, <> wrote:

> Good to hear from you again. Thank you for your kind offer of
> help. Given the time I have to give to the task and the rustiness of
> my Unix skills, I suspect it would take as much elapsed time for me
> to ferret out the files as it would for you to gather them.

It turned out that Ant was pretty well suited for the task:

  <zip destfile="${target}/">
    <fileset dir="${gump.base}" includes="**/*.xml" excludes="cvs/">
      <contains text="&lt;testsuite"/>
    </fileset>
  </zip>
and about ten minutes of waiting was really all it took to collect all
XML files that contain "<testsuite".  There will certainly be false
positives in the zip like Ant's own AntUnit tests (<antunit> creates
an XML output similar to <junit>) but I'm sure you can extract those.
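One way to weed out such false positives would be to look at the root element of each file rather than just grepping for the string. This is only a sketch of that idea - the heuristic (a <testsuite> root carrying a tests attribute, as the <junit> XML formatter writes) and the sample data are illustrative, not a definitive filter:

```python
import xml.etree.ElementTree as ET

def looks_like_junit_report(xml_text):
    """Heuristic: reports from Ant's <junit> task have a <testsuite>
    root element with tests/failures/errors count attributes."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        # not well-formed XML, so not a report we care about
        return False
    return root.tag == "testsuite" and "tests" in root.attrib

sample = '<testsuite name="FooTest" tests="3" failures="1" errors="0"/>'
print(looks_like_junit_report(sample))  # → True
print(looks_like_junit_report("<antunit/>"))  # → False
```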

> When you get a chance, I'd appreciate it if you could put them
> together for me in whatever format is easy for you to create.

vmgump's webserver doesn't serve home dirs anymore and I don't want to
fiddle with it right now, so I've moved the zip over to my place
(about 8MB).  Grab it from
<> - I'll delete it
some time over the next few days.

> Just as an introduction, one of the graphs I created shows the
> number of tests per test suite and the number of failures and errors
> per test suite.

JUnit's notion of a suite and Ant's are not identical; this stems from
the way Thomas Haas and I used JUnit when we wrote the <junit> task in
May/June 2000 and has carried over from there.

Neither of us used explicit TestSuites, only classes inheriting from
TestCase, and this is Ant's expectation - a <testsuite> in Ant's terms
is whatever invoking the static suite() method or automatically
extracting all test methods of a single class yields (or, more
recently, what you get by wrapping the test class in a JUnit4Adapter).
You get exactly one <testsuite> for each class the TestRunner has been
invoked on.
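So each report file corresponds to one class, and the per-suite counts for graphs like yours can be read straight off the <testsuite> attributes. A minimal sketch, assuming the tests/failures/errors attributes written by the <junit> XML formatter (the sample data is made up):

```python
import xml.etree.ElementTree as ET

def suite_stats(xml_text):
    """Extract the per-class counts from one <junit> report file."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.get("name"),
        "tests": int(root.get("tests", 0)),
        "failures": int(root.get("failures", 0)),
        "errors": int(root.get("errors", 0)),
    }

report = '<testsuite name="BarTest" tests="10" failures="2" errors="1"/>'
print(suite_stats(report))
# → {'name': 'BarTest', 'tests': 10, 'failures': 2, 'errors': 1}
```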

> (The x axis is roughly 2^n.) The astonishing thing for me in this
> case is the fairly clear power law distribution in the number of
> failures. Why in the world would that be?

You'd probably need to see how the data evolves over time to really see
what happens here.  Are the few cases with many failures simply tests
that are known to fail and that get ignored over time?  Or some sort
of refactoring that left failing tests behind with no time to adapt
them?
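If you want to check how stable that distribution is over time, one way is to bin suites by failure count on the same roughly-2^n x axis you describe. A hedged sketch - the binning rule and the data are purely illustrative:

```python
import math
from collections import Counter

def log2_bin(failures):
    # bucket 0 for zero failures, otherwise floor(log2(n)) + 1,
    # so bucket k covers 2^(k-1) .. 2^k - 1 failures
    return 0 if failures == 0 else int(math.log2(failures)) + 1

# made-up failure counts for a handful of suites
failure_counts = [0, 1, 1, 2, 3, 5, 17]
histogram = Counter(log2_bin(f) for f in failure_counts)
print(sorted(histogram.items()))
# → [(0, 1), (1, 2), (2, 2), (3, 1), (5, 1)]
```

Plotting such histograms for several snapshots of the archive would show whether the heavy tail is a few long-ignored suites or a moving population.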

