harmony-dev mailing list archives

From sebb <seb...@gmail.com>
Subject Re: test excludes
Date Wed, 26 May 2010 13:50:25 GMT
On 26/05/2010, Tim Ellison <t.p.ellison@gmail.com> wrote:
> On 26/May/2010 12:26, sebb wrote:
>  > On 26/05/2010, Tim Ellison <t.p.ellison@gmail.com> wrote:
>  >> On 26/May/2010 11:20, sebb wrote:
>  >>  > On 26/05/2010, Mark Hindess <mark.hindess@googlemail.com> wrote:
>  >>  >
>  >>  >>   On the other hand,
>  >>  >>  I'm not convinced that annotations are a good solution either since
>  >>  >>  they don't give you fine-grained control so every distinction has
>  >>  >>  to be represented by a separate method.
>  >>  >
>  >>  > That seems like a benefit to me. If a test method has several
>  >>  > asserts which are independent, then the first failure may be masking
>  >>  > later ones.
>  >>
>  >>  So how do you know which of the asserts are independent?  Letting them
>  >>  all fall through would seem wrong, since you would likely need to fix
>  >>  the first assertion failure and retest for the later dependent
>  >>  assertions to be meaningful.
>  >
>  > If a subsequent test cannot possibly work unless the current assertion
>  > succeeds, then the subsequent test is not independent.
>  >
>  > e.g.
>  >
>  > Fetch item from Map
>  > assertNotNull(item) // must succeed, else the next assert is bound to fail
>  > assertEquals("abc",item.getName())
>  >
>  > This may mean more, shorter tests (and possibly some duplication of
>  > setup), but it allows better control over expected failures.
>  >
>  > [Note that assertions in JUnit setUp() methods work fine]
>
> Yes, I understand the definition.  My question is how do you
>  (practically) find them in the 26,500 JUnit tests that we are currently
>  running.
>
>  You're not expecting me to read through the code and decide which are
>  independent and which are not, are you?!

Not for every test.

But the ones that fail need to be investigated anyway, and at that
point the opportunity should be taken to decompose them.
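
For the Map example above, decomposing might look something like this
(untested sketch; the Item class and the fixture map are invented here
purely for illustration):

    import java.util.HashMap;
    import java.util.Map;

    import junit.framework.TestCase;

    public class ItemMapTest extends TestCase {

        // Hypothetical value type, standing in for whatever the
        // real test exercises.
        static class Item {
            private final String name;
            Item(String name) { this.name = name; }
            String getName() { return name; }
        }

        private Item item;

        // The precondition lives in setUp(): if the fetch fails,
        // every test method is reported as an error, and no later
        // assertion is masked by an earlier one.
        protected void setUp() throws Exception {
            Map<String, Item> map = new HashMap<String, Item>();
            map.put("abc", new Item("abc"));
            item = map.get("abc");
            assertNotNull("item missing from map", item);
        }

        public void testItemName() {
            assertEquals("abc", item.getName());
        }

        public void testItemNameLength() {
            // independent of testItemName: runs and is reported
            // even if the name comparison above fails
            assertEquals(3, item.getName().length());
        }
    }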

If necessary, log a message to say that a particular test may fail on
some hosts.

If the test always fails on a particular host, then conditionally skip
that test, but log a message to say it has been skipped.
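
In JUnit 3.x something like this would do (sketch only; the Windows
check is an invented example of a host-specific condition):

    public void testSomethingHostSpecific() {
        String os = System.getProperty("os.name");
        if (os != null && os.startsWith("Windows")) {
            // skipped, but the skip is still visible in the output
            System.err.println("SKIPPED testSomethingHostSpecific:"
                    + " known failure on " + os);
            return;
        }
        // ... real assertions here ...
    }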

Suppressing the test so that no output appears seems like a bad idea to me.

>
>  >>  Without being able to distinguish I think we'd just get lots more
>  >>  failures listed but no way of knowing which ones are false positives.
>  >>
>  >>  Here's somebody's solution to making the assertions non-fatal.
>  >>  http://www.gnufoo.org/junit/index.html#failures
>  >
>  > Which looks fine, except that non-fatal assertions only help for
>  > *independent* tests.
>  >
>  > Otherwise they generate more output than necessary - e.g. my example
>  > above would generate at least two failures where one is sufficient.
>  >
>  > Independent tests need independent test methods, which is not always
>  > easy to do.
>
> Agreed.
>
>  Regards,
>
> Tim
>
