ant-user mailing list archives

From Steve Loughran <>
Subject Re: How to filter out all exceptions from the JUnit task's test failures
Date Wed, 02 Aug 2006 18:17:10 GMT
James Adams wrote:
> Steve Loughran wrote:
> > Take the XSL stylesheets that junit report uses and rework them to
> > meet your needs.
> Thanks Steve -- I'm not sure if this will solve my problem completely, 
> although it may be a good temporary fix.
> I want to reduce the number of exception lines in the results XML file, 
> *before* the XSL is applied to it, since my test suite is failing with 
> an out-of-memory error.  I assume that JUnit is keeping the exception 
> stack traces for each failure in memory, in anticipation of writing 
> them to the XML file once the suite finishes, and in my case the run 
> crashes from memory usage before the suite has a chance to complete.  
> I also want to reduce the amount of stack trace info reported because 
> it is extraneous information (after the first exception or two), but I 
> primarily need to limit it to keep from running out of memory when 
> running a suite of over 80 test cases.
> There doesn't appear to be an option in the JUnit task which allows this 
> sort of control over the exception stack trace reported for a test 
> failure -- can anyone suggest another approach?

The problem with Ant's junit runner is that it builds up the entire DOM 
in memory, for a single test. If the test kills the process, boom, no log.

It's possibly not so much the stack traces as the whole collection of 
everything that is causing the problem. Log messages, in particular, 
take up lots of space.

The real solution for you is going to be a new junit test runner, one 
which generates different XML output. We are not planning one for 
Ant 1.7, though I may work with the JUnit people to do one in junit4.x.

If you look at my ApacheCon slides on system testing, you can see a 
pointer to the junit test runner I wrote for smartfrog. It does the 
following things:

-decouples test runner from test reporter. They communicate (via RMI), 
sending serialized test results over the wire. A failure of the test 
runner does not impact the reporter if they are in different processes.
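As a rough illustration of that decoupling, here is a minimal sketch in plain Java: the runner only sees a reporter interface, and each result crosses the wire as a serializable event. All of the class and method names here are made up for the example; they are not the actual smartfrog API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class Decoupled {

    /** One finished test, serialized and sent to a possibly remote reporter. */
    public static class TestResultEvent implements Serializable {
        private static final long serialVersionUID = 1L;
        public final String testName;
        public final boolean passed;
        public final String trace;   // stack trace text, or null on success

        public TestResultEvent(String testName, boolean passed, String trace) {
            this.testName = testName;
            this.passed = passed;
            this.trace = trace;
        }
    }

    /** The runner talks only to this; an RMI stub can stand behind it. */
    public interface TestReporter {
        void testEnded(TestResultEvent event);
    }

    /** Round-trip helper showing the event really is wire-serializable. */
    public static TestResultEvent roundTrip(TestResultEvent e) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(e);
            oos.flush();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()));
            return (TestResultEvent) in.readObject();
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }
}
```

Because the reporter sits behind an interface and the events are plain serializable objects, the runner JVM can die without taking the reporter's accumulated results with it.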

-has test reporters that can listen for data coming from different 
hosts. A single reporter could handle test suite results coming in from, 
say, four junit processes each interop testing against a different SOAP 
endpoint, or the same unit tests running on 8 different machines.

-has the following test reporters (which just collect results and do 
things with them)
   1. statistics reporter (which collects stats)
   2. chaining reporter (which forwards results to a set of other reporters)
   3. text reporter (to the console)
   4. XML reporter. With a different schema from Ant, one which can 
handle streamed output. I stream out logs, exceptions &c and stick the 
summary at the end, not the beginning.
   5. XHTML reporter. This is just a subclass of (4), because XHTML is, 
well, XML. It avoids having to do the XSL stage, and is still valid XML 
for other XSL work. I know, there is that phrase "microformats" coming 
to mind...
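The streamed-output idea behind reporter (4) can be sketched in a few lines of Java: each result is written out as soon as it arrives, and the summary goes at the end of the document rather than in the opening element's attributes. The class name and element names here are illustrative only, not the actual smartfrog schema.

```java
import java.io.PrintWriter;

/**
 * Sketch of a streaming XML test reporter: results are flushed as they
 * arrive, and the summary element goes at the END, so a crash mid-run
 * still leaves all the earlier results on disk.
 */
public class StreamingXmlReporter {
    private final PrintWriter out;
    private int runs;
    private int failures;

    public StreamingXmlReporter(PrintWriter out) {
        this.out = out;
        out.println("<testsuite>");   // opened up front, no counts needed yet
    }

    public void testFinished(String name, String failureTrace) {
        runs++;
        out.print("  <testcase name=\"" + name + "\"");
        if (failureTrace == null) {
            out.println("/>");
        } else {
            failures++;
            // stream the trace straight out instead of buffering it in a DOM
            out.println("><failure>" + failureTrace + "</failure></testcase>");
        }
        out.flush();                  // earlier results survive a later crash
    }

    public void close() {
        // summary last: a buffered DOM is only needed if counts come first
        out.println("  <summary tests=\"" + runs
                + "\" failures=\"" + failures + "\"/>");
        out.println("</testsuite>");
        out.flush();
    }
}
```

The contrast with Ant's own XML formatter is the point: Ant's schema puts the counts in the `<testsuite>` attributes, which forces it to hold the whole document in memory until the run finishes.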

I'm going to put most of my junit dev effort into improving this test 
framework, plus some effort on putting good junit4 support into the 
junit4 codebase, including the bits of my code that I want to keep 
(like a good serialisation of test results, including log data and 
exception traces).

If you want to use it, myself and others in the smartfrog team can 
help. Otherwise, short term, I would recommend:

-set up <junit> to fork for every test case.
-if there is one test case that is failing a lot, split it into two 
(create an abstract base class with the setup/teardown, and subclasses 
with smaller suites in).
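The fork-per-test setup looks something like this in a build file (the classpath refid, directories, and includes pattern are placeholders for your own):

```xml
<junit fork="true" forkmode="perTest" maxmemory="256m"
       printsummary="on" haltonfailure="false">
  <classpath refid="test.classpath"/>
  <formatter type="xml"/>
  <batchtest todir="build/test-reports">
    <fileset dir="build/test-classes" includes="**/*Test.class"/>
  </batchtest>
</junit>
```

With forkmode="perTest" each test class runs in its own JVM, so an OutOfMemoryError in one class no longer takes the rest of the run, or the XML files already written, down with it.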

