commons-dev mailing list archives

From Gary Gregory <GGreg...@seagullsoftware.com>
Subject RE: [codec][lang] Provide a test jar plus [daemon]
Date Tue, 06 Apr 2010 16:51:57 GMT
Hi Jörg...

> -----Original Message-----
> From: Jörg Schaible [mailto:joerg.schaible@gmx.de]
> Sent: Tuesday, April 06, 2010 01:17
> To: dev@commons.apache.org
> Subject: RE: [codec][lang] Provide a test jar plus [daemon]
> 
> Hi Gary,
> 
> Gary Gregory wrote at Monday, 5. April 2010 18:16:
> 
> > Seeing the discussion about [daemon] and not releasing made me think of
> > another use for a test jar file.
> >
> > What I would like to know when evaluating an RC for releasing a
> > maintenance of a commons component (from x.y.n to x.y.n+1) is that there
> > is 100% binary compatibility.
> >
> > As part of the build I would run (at least) the 1.0.2 unit tests against
> > the 1.0.3 RC. If 100% pass all is well, if not, it is either a bug or a
> > known acceptable failure fixing a bug and should be documented somehow,
> > probably in a ticket.
> >
> > This would mean that a release 1.0.3 RC would include foo-test-1.0.2.jar.
> > And maybe also foo-test-1.0.0.jar and foo-test-1.0.1.jar, hm...
> >
> > Thoughts?
> 
> In practice the test jar is often not feasible. You will need the test
> sources, apply possible patches, compile the result and run the tests
> against that. 

Yes, I expect the test jar to contain as much as possible to help with debugging, including
test fixtures (data files and so on) as well as the test sources. No patches; the point is
to find out what happens when you run the old tests against the current code. You want to
find incompatibilities.
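
For illustration, here is a minimal Maven sketch of how the old test jar could be produced and
consumed (the coordinates and versions are made up; this assumes the standard maven-jar-plugin
test-jar goal and a test-jar dependency):

   <!-- In foo 1.0.2: attach the compiled test classes and test resources as foo-1.0.2-tests.jar -->
   <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <executions>
         <execution>
            <goals>
               <goal>test-jar</goal>
            </goals>
         </execution>
      </executions>
   </plugin>

   <!-- In the 1.0.3 RC build: pull in the 1.0.2 tests and run them against the new code -->
   <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>foo</artifactId>
      <version>1.0.2</version>
      <type>test-jar</type>
      <scope>test</scope>
   </dependency>

Note that the default Maven test jar contains the compiled test classes plus src/test/resources,
so data-file fixtures come along for free; shipping the test *sources* would need an extra step
(the source plugin has a test-jar goal for that, IIRC).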

> This is simply because you will sometimes run into a situation
> where an old unit test verifies wrong behavior that has been fixed in the
> new version. Therefore, IMHO, you also need a jar with the test sources.
> 
> If you really want to use the test jar as produced by Maven by default, you
> will need a mechanism for skipping individual tests or declaring their
> failures as expected.

Yes, I thought I outlined this here:

> > As part of the build I would run (at least) the 1.0.2 unit tests against
> > the 1.0.3 RC. If 100% pass all is well, if not, it is either a bug or a
> > known acceptable failure fixing a bug and should be documented somehow,
> > probably in a ticket.

Just to rephrase and make it clearer (in my mind at least):

When you run the tests in a test jar, tests will pass or fail. They will fail for one of
two reasons:

(1) A regression bug: The current code has changed and the behavior is no longer compatible.

(2) A bug has been fixed and that breaks old tests which expected the buggy behavior.

In the case of (1), each failure should be reported as a real problem ("red bar" in JUnit
parlance).

In the case of (2), each failure should be documented and reported as a passing test.

This will require some new kind of document, compiled by hand, to track this information.
Maybe something like:

<KnownTestFailures>
   <FixedBug id="LANG-1234" class="FooTest" method="testFoo" />
   ...
</KnownTestFailures>

Maybe this can be codified differently, I am not sure; TBD really.
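
One (purely hypothetical) way to apply such a list would be to generate Surefire excludes from
it when running the old test jar, so that known-acceptable failures do not turn the build red
while real regressions still do. Something along these lines, assuming class-level granularity:

   <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
         <!-- generated by hand (or a small tool) from <KnownTestFailures>;
              each excluded test corresponds to a fixed bug, not a regression -->
         <excludes>
            <exclude>**/FooTest.java</exclude>
         </excludes>
      </configuration>
   </plugin>

Excluding a whole class is coarser than the per-method entries above, so this is only a rough
idea of the mechanics.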

Gary
> 
> - Jörg
> 
> 
