ant-dev mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Ant Wiki] Update of "Proposals/EnhancedTestReports" by SteveLoughran
Date Tue, 27 Nov 2007 12:56:00 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Ant Wiki" for change notification.

The following page has been changed by SteveLoughran:
http://wiki.apache.org/ant/Proposals/EnhancedTestReports

The comment on the change is:
introduce the whole idea

New page:
= Enhanced Test Reporting =

This page collects ideas for enhancing the output of the JUnit test reports that we generate
in <junit> and turn into HTML with <junitreport>.

== Strengths of current format ==

 1. Ubiquitous
 1. Scales well to lots of JUnit tests on a single process
 1. Integrates with CI servers
 1. Produced by many tools: <junit>, AntUnit, Maven Surefire, TestNG
 1. Consumed by: <junitreport>, CruiseControl, Luntbuild, Bamboo, Hudson
 1. Includes log output, JVM properties, test names
 1. Reasonable HTML security (we sanitise all output except test names in the generated HTML)

== Limitations of current format ==

 * Storing summary information as root node attributes prevents us from streaming output
 * JVM crashes mid-test result in empty XML files; no opportunity for post mortem
 * No test metadata is stored other than package/name, which we assume is always in Java package format
 * No machine information recorded other than JVM properties, hostname and wall time.
 * No direct integration with issue tracking systems (which bug is this a regression test
of, who to page)
 * No notion of skipped tests, timeouts, or other failure modes.
 * Output is logged, but log4j/commons-logging/java.util.logging output is not split up into separate events
 * Output from a single process is logged; no good for multi-process/multi-host testing
 * The XSLT transform uses too much memory for the Java 5 XSLTC engine unless you raise the heap limit with -Xmx.
 * The normal transformed layout doesn't work when running the same test across many machines/configurations.
 * There could be more data-mining opportunities if more system state were recorded (e.g. detecting which platforms/configurations trigger test failures)
 * Stack traces are saved as plain text, not as structured file/line data with the ability to span languages
 * No way to attach artifacts such as VMWare images to test results
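
The first limitation above can be made concrete: because today's files carry pass/fail counts as attributes on the root element, a writer must buffer the whole run before emitting anything. Here is a minimal sketch, using hypothetical element names rather than any proposed schema, of a streaming-friendly layout written with StAX, where the summary trails the results:

```java
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;
import java.io.StringWriter;

public class StreamingReport {
    public static String write(String[] names, boolean[] passed) throws Exception {
        StringWriter out = new StringWriter();
        XMLStreamWriter w = XMLOutputFactory.newInstance().createXMLStreamWriter(out);
        w.writeStartDocument();
        // No counts on the root element: they are not known yet.
        w.writeStartElement("testrun");
        int failures = 0;
        for (int i = 0; i < names.length; i++) {
            w.writeStartElement("testcase");
            w.writeAttribute("name", names[i]);
            w.writeAttribute("result", passed[i] ? "pass" : "fail");
            if (!passed[i]) failures++;
            w.writeEndElement();
            w.flush(); // each result can reach disk as it happens
        }
        // The summary trails the results, so a crash only truncates the tail.
        w.writeStartElement("summary");
        w.writeAttribute("tests", Integer.toString(names.length));
        w.writeAttribute("failures", Integer.toString(failures));
        w.writeEndElement();
        w.writeEndElement();
        w.writeEndDocument();
        w.close();
        return out.toString();
    }
}
```

A consumer that wants the totals up front would re-read or post-process the file, which is the usual price of streamability.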


Summary: the format was good at the time, but as testing has become more advanced, we need to evolve
the format (carefully).

== Radical Alternatives ==

Here are some fairly radical alternate designs that SteveLoughran has played with in the context
of running tests under SmartFrog.

 * Standard serializable Java types for tests. Must include log entries and exceptions. These
can be marshalled over RMI or serialised to text files. Enables a tight coupling of reporting
across processes. It is, however, hard to maintain serialization stability, especially with OSS code that
anyone can change. The limit of these types' use would probably be the JUnit test runner
and Ant itself, both from the same point release of Ant.

 * Streamed XHTML with standard class names. Here an inline CSS sheet provides the styles,
and tests are streamed to disk as marked-up div/span elements. Does not directly scale well
to presenting very large test runs; postprocessing is required. XSL can still generate alternate
reports, though the XPath patterns are more complex: //div[@class="error"] instead of //error.
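
To give a flavour of that extra complexity, here is a hedged sketch of selecting failures from the XHTML form with the standard javax.xml.xpath API; the sample document and class names are invented for illustration:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;

public class ClassSelect {
    /** Count error divs; the class-attribute predicate replaces a simple //error query. */
    public static int countErrors(String xhtml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xhtml.getBytes("UTF-8")));
        NodeList hits = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate("//div[@class='error']", doc, XPathConstants.NODESET);
        return hits.getLength();
    }
}
```

Note that the predicate match is exact: a div carrying class="error warning" would need a contains() test, which is another place the XHTML form is more fiddly than a dedicated element.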


 * Atom. Here every test result would be presented as an Atom entry, possibly using streamed
XHTML as above. Enables remote systems to poll for changes, and browsers to present results
as-is. Does not directly scale well to presenting very large test runs; postprocessing is
required. XPath is even more complex.

 * Perl test format: [http://perldoc.perl.org/Test/Harness/TAP.html TAP] is a text-only format,
and very simple.
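
To show just how simple, here is a hedged sketch of a trivial emitter for TAP's core shape, a "1..N" plan line followed by one ok/not ok line per test; real TAP streams also carry diagnostics and directives, which this ignores:

```java
import java.util.List;

public class TapEmitter {
    /** Render results as a minimal TAP stream: a plan line, then one line per test. */
    public static String emit(List<String> names, List<Boolean> passed) {
        StringBuilder sb = new StringBuilder();
        sb.append("1..").append(names.size()).append('\n');
        for (int i = 0; i < names.size(); i++) {
            sb.append(passed.get(i) ? "ok " : "not ok ")
              .append(i + 1)                    // TAP test numbers are 1-based
              .append(" - ").append(names.get(i))
              .append('\n');
        }
        return sb.toString();
    }
}
```

Because each line stands alone, TAP streams trivially and survives a crash mid-run, though it carries far less structure than the XML format.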

== Evolutionary Alternatives ==

 * add the key missing features to the reports: metadata, skipped tests, and scope for
logging more machine configuration data
 * add a placeholder for test runners to add their own stuff
 * add stack trace data in a more structured form when tests fail on Java 5+
 * base metadata to include: machine info, test description, links and issue IDs (for binding
to issue tracking systems)
 *  ''add new features in streaming friendly manner''
 *  ''add handling for JVM crashes'': stream results to a different process for logging, so
that a JVM crash truncates the log instead of destroying it.
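
That last idea could look something like the following sketch: the test JVM pushes one line per event over a plain socket to a collector process, so its own crash can only truncate what the collector has already received. The line format and class name here are hypothetical, not a proposed protocol:

```java
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.net.Socket;

/** Sketch: the test JVM streams one line per test event to an external collector,
 *  so a crash in this JVM truncates the collector's log rather than destroying it. */
public class EventStream {
    public static void send(String host, int port, Iterable<String> events) throws IOException {
        try (Socket s = new Socket(host, port);
             PrintWriter out = new PrintWriter(
                     new OutputStreamWriter(s.getOutputStream(), "UTF-8"), true)) {
            for (String e : events) {
                out.println(e); // autoflush: every event leaves this JVM immediately
            }
        }
    }
}
```

The collector end would append lines to disk as they arrive; on a crash it simply sees EOF and can record everything received so far as a partial run.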


== Interested Parties ==

Add the names of people actively participating

 * Ant team (ant-dev)
 * SmartFrog team (SteveLoughran)
 * TestNG: AlexandruPopescu
 * Maven Surefire: DanFabulich, BrettPorter

== Plan and Timetable ==

For Ant we'd be looking at Ant 1.8.0 and any 1.7.2 release for these changes; that gives us time
to test with all the CI servers. We could do a JUnit 4 test runner with early output. TestNG
is on a faster release cycle.

 1. Start wiki, discussions on various lists (ant-user, testng-user)
 1. define the first RELAX NG schema
 1. set up VMs (EC2? Apache VM farm?) with the various CI servers
 1. add a new <junit> reporter; test that its output works with existing consumers
 1. evolve the XSL stylesheets
 1. add a JUnit 4 test runner; let TestNG and SmartFrog do their prototyping
 1. ship!
