lucene-dev mailing list archives

From Dawid Weiss <>
Subject Re: Annotation for "run this test, but don't fail build if it fails" ?
Date Wed, 09 May 2012 08:12:31 GMT
> That was really the main question I had, as someone not very familiar with
> the internals of JUnit: is whether it was possible for our test runner to
> make the ultimate decision about the success/fail status of the entire
> run based on the annotations of the tests that fail/succeed

There are two things that need to be distinguished:

1) The "runner" is what's passed to @RunWith(Runner.class). The runner
is given a suite class and runs its tests (propagating test execution
events to any interested listeners). We use RandomizedRunner, which
manages certain things on top of the default JUnit runner (thread
groups, a custom annotation for seeds, custom test methods in junit3
style, etc.).
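To make the "runner" concept concrete, here is a toy sketch of what a runner does conceptually -- it is handed a suite class, discovers its test methods, runs them, and propagates pass/fail events to a listener. This is illustrative reflection code only, not the real JUnit or RandomizedRunner API; all class and method names below are hypothetical:

```java
import java.lang.reflect.Method;
import java.util.function.BiConsumer;

// Toy illustration of a JUnit-style "runner" (not the real API): given a
// suite class, find its test methods, run them, and report each result
// to a listener.
public class ToyRunner {
    /** Runs every public no-arg method named test* and reports to the listener. */
    static void run(Class<?> suite, BiConsumer<String, Boolean> listener) throws Exception {
        Object instance = suite.getDeclaredConstructor().newInstance();
        for (Method m : suite.getMethods()) {
            if (m.getName().startsWith("test") && m.getParameterCount() == 0) {
                boolean passed = true;
                try {
                    m.invoke(instance);
                } catch (Exception e) {
                    passed = false; // the test threw -> failure event
                }
                listener.accept(m.getName(), passed);
            }
        }
    }

    // A junit3-style suite, purely for demonstration.
    public static class DemoSuite {
        public void testOk() { /* passes */ }
        public void testBoom() { throw new RuntimeException("boom"); }
    }

    public static void main(String[] args) throws Exception {
        run(DemoSuite.class,
            (name, ok) -> System.out.println(name + " " + (ok ? "PASSED" : "FAILED")));
    }
}
```

A real runner additionally handles lifecycle annotations, assumptions, filters, and listener registration -- but the event-propagation shape is the same.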

2) ant's task for executing JUnit tests (junit4). This one is
responsible for collecting suites, forking JVMs and managing
listeners. It is also responsible for failing ant's build (by throwing
an exception) if requested -- see the "haltonfailure" property.
This is consistent with ant's default runner.
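For comparison, this is roughly how the same knob looks on ant's standard <junit> task (attribute names are ant's own; the paths here are hypothetical):

```xml
<!-- Sketch of a plain ant <junit> invocation. haltonfailure="true" makes
     the task fail the build as soon as any test fails; set it to "false"
     to run everything and inspect the reports afterwards. -->
<junit haltonfailure="true" fork="true">
  <classpath path="build/classes:build/test-classes"/>
  <batchtest todir="build/test-results">
    <fileset dir="build/test-classes" includes="**/*Test.class"/>
  </batchtest>
</junit>
```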

There is some confusion between the two -- in Lucene we use both, but
you could run suites annotated with @RunWith(RandomizedRunner.class)
using any container you want (including the standard ant <junit> task).

Unfortunately this also means that whether a build fails is a direct
consequence of whether any of the tests failed. There are no other
conditions for this (including complex conditions like the ones you
describe).

> I know that things like jenkins are totally fine with the idea of a build
> succeeding even if some of the junit testsuite.xml files contain failures

This is the "haltonfailure" option. You can actually override it from
the command line in Lucene's build scripts and run the full build
without stopping on errors -- see ant test-help:

  [echo] # Run all tests without stopping on errors (inspect log files!).
  [echo] ant -Dtests.haltonfailure=false test

> health) but the key question is could we have our test runner say "test
> X failed, therefore the build should fail" but also "test Y failed, and
> test Y is annotated with @UnstableTest, therefore don't let that failure
> fail the entire build."

Not really. Not in a way that would be elegant and fit into JUnit
listener infrastructure. Read on.

> Ultimately I think it's important that these failures be reported as
> failures -- because that's truly what they are -- we shouldn't try to
> sugar coat it, or pretend something happened that didn't.   Ideally these


> I think a Version 2.0 "feature" would be to see aggregated historic stats
> on the pass/fail rate of every test, regardless of its annotation, so we
> can see:
>  a) statistically, how often does test X fail on jenkins?
>  b) statistically, how often does test X fail on my box?
>  c) statistically, how often does test X fail on your box? oh really -
> that's the same stats that Pete is seeing, but much higher than anyone
> else including jenkins and you both run Windows, so maybe there is a
> platform specific bug in the code and/or test?

This is an interesting idea and I think this could be done by adding a
custom report and some marker in an assumption-ignore status... The
history is doable (much like execution times currently)... but it's

>  1) "ant test"
>    - treats @UnstableTest the same as @AwaitsFix
>    - fails the build if any test fails
>  2) "ant test-unstable"
>    - *only* runs @UnstableTest tests
>    - doesn't fail the build for any reason
>    - puts the result XML files in the same place as "ant test"
>      (so jenkins UI sees them)
>  3) jenkins runs "ant test test-unstable"

This is doable by enabling/disabling test groups. A new build plan
would need to be created that would do:

ant -Dtests.haltonfailure=false -Dtests.awaitsfix=true -Dtests.unstable=true test

Both awaitsfix and unstable would be disabled by default (and
assumption-ignored), so this wouldn't affect anybody else. The above
would run all tests without stopping on errors. A post-processing
script could then parse the JSON (or XML) reports and collect
historical statistics.
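Such a post-processing step could start from something like the sketch below, which assumes the common JUnit XML report layout (a <testsuite> containing <testcase> elements, with failures marked by a nested <failure> or <error> element). The class name, report locations, and where the stats would be stored are all left out or hypothetical:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Minimal sketch: parse one JUnit XML report and record, per test,
// whether it passed -- the raw material for pass/fail-rate statistics
// accumulated over many runs.
public class ReportStats {
    /** Maps "classname.testname" to true (passed) or false (failed/errored). */
    static Map<String, Boolean> parse(String reportXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(reportXml.getBytes(StandardCharsets.UTF_8)));
        Map<String, Boolean> results = new LinkedHashMap<>();
        NodeList cases = doc.getElementsByTagName("testcase");
        for (int i = 0; i < cases.getLength(); i++) {
            Element tc = (Element) cases.item(i);
            String name = tc.getAttribute("classname") + "." + tc.getAttribute("name");
            boolean failed = tc.getElementsByTagName("failure").getLength() > 0
                          || tc.getElementsByTagName("error").getLength() > 0;
            results.put(name, !failed);
        }
        return results;
    }

    public static void main(String[] args) throws Exception {
        String sample =
            "<testsuite name='TDemo'>"
          + "<testcase classname='TDemo' name='testOk'/>"
          + "<testcase classname='TDemo' name='testBad'><failure message='x'/></testcase>"
          + "</testsuite>";
        parse(sample).forEach((k, v) -> System.out.println(k + " " + (v ? "PASS" : "FAIL")));
    }
}
```

The historical part would then just be appending each run's map (keyed by run timestamp) somewhere queryable.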

Doable, but honestly this seems like more work (the scripts for
collecting stats; the test groups themselves are trivial) than just
fixing those two or three tests that fail.

