db-ojb-dev mailing list archives

From Thomas Mahler <thm...@web.de>
Subject Re: junit test failures
Date Wed, 12 Nov 2003 07:23:42 GMT
Hi Olli,

oliver.matz@ppi.de wrote:
> Hello,
>>-----Original Message-----
>>From: oliver.matz@ppi.de [mailto:oliver.matz@ppi.de]
>>Currently, there are several junit tests failing on my machine:
> [..]
>>Can I compare these results to the 'official' ones?
>>The link http://db.apache.org/ojb/junit-report.html
>>on http://db.apache.org/ojb/maven-reports.html is broken.
> the tests have changed, but this issue is still open.

I'm not sure what you mean by "open issue":
1. the failing junit tests
2. the missing junit reports on the web site

or both of them?

> Does someone take care of this?

for 1.: I'm working on the failing junit tests so that a public 
release has none. IMO it is a must to ship a public 
release with no failing junit tests.

for 2.: I never managed to generate the junit test reports with maven; 
that's why they are missing from the website.
We now have an ant target junit-report that generates these reports, 
so it should be easy to add them to the web site. I will do so for the 
next release.
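For readers not familiar with Ant's reporting support: such a target typically builds on the standard <junitreport> task, which merges the per-testcase XML result files written by the <junit> task into browsable HTML. The following is only a sketch with made-up property and directory names, not the actual OJB target:

```xml
<!-- Sketch of a junit-report target (property names are hypothetical,
     not taken from OJB's build.xml). Assumes the junit target has
     already written TEST-*.xml result files via <formatter type="xml"/>. -->
<target name="junit-report" depends="junit">
  <junitreport todir="${build.dir}/reports">
    <fileset dir="${build.dir}/reports" includes="TEST-*.xml"/>
    <!-- generate framed HTML pages from the merged results -->
    <report format="frames" todir="${build.dir}/reports/html"/>
  </junitreport>
</target>
```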

> Can we achieve any consensus about how to deal with failing
> tests?  I have stated my opinion about this in another
> thread.

I don't remember what you wrote, but I agree that a consensus is required.

here are some basic guidelines that I'd like to see in place:

A. regression tests
1. Never check in code into CVS without a full junit run.
2. Don't check in code that produces junit failures or errors.
3. If a bugfix conflicts with existing junit tests, the developer who 
checks in that code is responsible for checking whether the semantics of 
the old tests were really wrong and whether it is OK to change them.
If it is OK, he is also responsible for changing those tests.
If it is not OK, we need additional clarification whether the bugfix is 
appropriate, as it conflicts with valid testcases. This clarification 
should happen on the developer mailing list.
4. All committers are encouraged to fix junit failures they come across.

B. tests for unimplemented features
1. Sometimes it makes sense to write testcases that prove that a certain 
feature is not yet implemented. These testcases clearly show that 
additional work has to be done.
To avoid confusion, such tests should be kept separate from the 
regression tests that are used to prove that working features are not 
broken by later changes.

I propose to keep those tests in their own junit TestSuite and to run 
them with their own ant target, say junit-unimplemented-features.
With this separation we can use the junit target to prove that we have 
no regressions and, OTOH, see the open issues with the other target.

2. Once a missing feature is implemented, the corresponding testcases must 
be moved to the AllTests testsuite and become part of the regression tests.
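The separation proposed above could be wired into the build with two Ant targets, e.g. driven by a package naming convention. This is only a sketch under assumed property names and an assumed "unimplemented" package, not OJB's actual build file:

```xml
<!-- Sketch: regression tests vs. unimplemented-feature tests, separated
     by package name. All property names and the "unimplemented" package
     are hypothetical. -->
<target name="junit" depends="compile-tests">
  <junit printsummary="on" haltonfailure="no">
    <classpath refid="test.classpath"/>
    <formatter type="xml"/>
    <!-- regression tests only: everything except unimplemented features -->
    <batchtest todir="${build.dir}/reports">
      <fileset dir="${test.classes.dir}">
        <include name="**/*Test.class"/>
        <exclude name="**/unimplemented/**"/>
      </fileset>
    </batchtest>
  </junit>
</target>

<target name="junit-unimplemented-features" depends="compile-tests">
  <junit printsummary="on" haltonfailure="no">
    <classpath refid="test.classpath"/>
    <formatter type="xml"/>
    <!-- tests documenting features that are not yet implemented -->
    <batchtest todir="${build.dir}/reports">
      <fileset dir="${test.classes.dir}">
        <include name="**/unimplemented/**/*Test.class"/>
      </fileset>
    </batchtest>
  </junit>
</target>
```

Moving a testcase out of the unimplemented package (once the feature works) then automatically makes it part of the regression run.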

I hope the above sounds reasonable,
cu Thomas

> Olli

To unsubscribe, e-mail: ojb-dev-unsubscribe@db.apache.org
For additional commands, e-mail: ojb-dev-help@db.apache.org
