geronimo-dev mailing list archives

From Jason Dillon <>
Subject Re: Testsuite and maven-maven-plugin
Date Thu, 28 Sep 2006 08:56:41 GMT
On Sep 27, 2006, at 8:40 PM, Prasad Kashyap wrote:
> Custom packaging or lifecycle phase bindings
> ---------------------------------------------------------------------
> The geronimo-maven-plugin (g-m-p) goals like start/stop server,
> deploy/undeploy modules will be executed in each of these pom.xmls.
> The start-server and deploy-module goals will be bound to some early
> phase in the lifecycle, say "validate" (example only) using the
> @phase. The stop-server and undeploy-module goals will be bound to a
> later phase, say "install". The tests will be executed in the "test"
> phase in between the modules deploy and undeploy.

I think it is inappropriate to overload phases intended for the jar
packaging for use by integration tests.  As I have mentioned a few
times before, it is easy enough to create a new packaging which has
an appropriate set of phases, probably something like:

  # validate
  # generate-test-sources
  # process-test-sources
  # generate-test-resources
  # process-test-resources
  # test-compile
  # pre-integration-test
  # test
  # post-integration-test
  # verify

And then deploy-module is bound to pre-integration-test and
undeploy-module is bound to post-integration-test.
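
For example, if we were wiring this by hand in a pom rather than via
the packaging, the bindings would look roughly like this (a sketch
only; the groupId below is a guess on my part, only the goal and
phase names come from above):

    <plugin>
      <groupId>org.apache.geronimo.plugins</groupId>
      <artifactId>geronimo-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>deploy</id>
          <phase>pre-integration-test</phase>
          <goals>
            <goal>deploy-module</goal>
          </goals>
        </execution>
        <execution>
          <id>undeploy</id>
          <phase>post-integration-test</phase>
          <goals>
            <goal>undeploy-module</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

The custom packaging would simply bake these bindings into its
lifecycle so each pom does not have to repeat them.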

> In each *-testsuite,  multiple modules can be deployed and a subset of
> tests can be run against them. Eg: ATest runs on module1, BTest runs
> on module2 and module3, CTest runs on module3 etc.

I believe we want something more along the lines of a layout where
the *-testsuite poms contain the start-server and stop-server goals
and run each module* in a separate child maven execution with the
maven-maven-plugin.  Each module can define its own set of artifacts
to deploy and undeploy.

This allows different combinations of deployments to be tested  
against the same running server instance.  It also provides a  
flexible setup to allow for better organization of tests and their  
deployment configuration.  And, since each module is executed in a  
child maven process, the configuration of that execution is more  
flexible and not limited to working under the same profile that was  
used to start/stop the server.

This is very similar to how the current testsuite is already set up.
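
Roughly, each *-testsuite pom would then have the server start/stop
bracketing the integration-test phases, something like (again just a
sketch; the groupId is a guess, and I have left out the
maven-maven-plugin execution that forks the module* builds since its
configuration is not nailed down here):

    <plugin>
      <groupId>org.apache.geronimo.plugins</groupId>
      <artifactId>geronimo-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>start</id>
          <phase>pre-integration-test</phase>
          <goals>
            <goal>start-server</goal>
          </goals>
        </execution>
        <execution>
          <id>stop</id>
          <phase>post-integration-test</phase>
          <goals>
            <goal>stop-server</goal>
          </goals>
        </execution>
      </executions>
    </plugin>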

> Exception handling
> ----------------------------
> Only the exceptions from start-server and stop-server goals should
> stop the test run completely. Failures from deploy-modules and
> undeploy-modules should be logged and the  test should continue.
> Failures from TestNG testcases should be logged and ignored. Tests
> should continue.

Fair enough.

> Cleanup will be done by the stop-server goal at the end. The start-server
> configuration in each *-testsuite should have the <refresh> set to
> true. This will ensure that each suite will start with a clean slate.

While I agree this may be needed (to use <refresh>), hopefully our
tests will also function as expected without the refresh... since I
hope that undeploy-module will work as intended.  So, I would extend
the "cleanup" to the modules themselves: a module which deploys
artifacts should also undeploy them, so that the next module run does
not have the previous module's clutter running in the server.
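
Concretely, that means the start-server execution in each *-testsuite
pom carries the refresh flag, i.e. something like this (sketch only;
<refresh> is the only parameter I am relying on here):

    <configuration>
      <!-- wipe server state so each suite starts with a clean slate -->
      <refresh>true</refresh>
    </configuration>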

> Reporting
> --------------
> The goals in g-m-p already support multiple reporting mechanisms
> (SurefireReporter already implemented). The TestNG testcases will be
> logged and reported by surefire anyway. So we have one reporting story
> in place.  Similar to the way g-m-p goals handle other reporter
> implementations, we have to write a goal that will handle the TestNG
> test exceptions for other reporters too.

I believe that in general we will eventually want a reporting plugin,
which can take a configurable set of "collectors" that scrape build
output (like surefire reports, pmd stats, clover results, etc) and
then do something magical with them (like update a database, or
inject the data into a reporting engine like pentaho)... and maybe
one day, when Maven has a built-in rich reporting system, we can use
that instead... assuming it's cooked enough.

If we had such a plugin, then some of the reporter bits we added to
g-m-p could be removed.  g-m-p would only need to worry about dumping
log files and maybe generating a properties file with more details.
Then we would have a "collector" impl which would know how to take
the g-m-p output files and massage them into a rich reporting system.

I think the bulk of these collectors will take text inputs from
existing plugins (i.e. no mods to the plugins) and then generate some
xml files, which would then be massaged by some reporting-system-specific
handler, which may upload to a reporting server or insert into a
database, yada yada yada.
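
Purely hypothetical, but I could imagine the configuration for such a
reporting plugin ending up along these lines (none of these plugin or
element names exist today; surefire/pmd/clover are just the collector
examples from above):

    <plugin>
      <!-- hypothetical coordinates; this plugin does not exist yet -->
      <groupId>org.apache.geronimo.plugins</groupId>
      <artifactId>reporting-maven-plugin</artifactId>
      <configuration>
        <collectors>
          <collector>surefire</collector>
          <collector>pmd</collector>
          <collector>clover</collector>
        </collectors>
        <!-- where the massaged results get published, e.g. a database
             or a reporting engine like pentaho -->
        <publisher>
          <url>...</url>
        </publisher>
      </configuration>
    </plugin>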

Build flow (for a given *-testsuite) probably ends up something like:

     generate build ticket (used to track all child module reports)
     do pre-integration-test (start-server, blah)
     for each module {
         run tests
         run plugins (like pmd, clover, etc)
         collect reports
     }
     do post-integration-test
     publish reports

Published reports for a given run will all be indexed together by the
build ticket, so it will be easy to see which results belong to which
build.  In this example "generate build ticket", "collect reports"
and "publish reports" would all be handled by the reporting plugin I
am talking about; the rest is the existing testsuite bits.

I think this will work out quite well... giving us flexibility w/o
making our existing plugins more complicated or waiting for other
plugin developers to make modifications for us (which is not very
likely to happen anyway).


I think that the reporters we have now (in g-m-p) are fine for a
while though... but as soon as we need to start adding similar
reporter-like functionality to other plugins, then we should really
consider implementing a more generic reporting system that can
function on plugin outputs alone (or using a Maven reporting api
if/when it becomes available).

  * * *

Overall I think the testsuite work of late is positive movement for  
our community to be able to start building a more comprehensive set  
of integration and system tests... NOW.

I believe that we do need to start working on making some real  
tests... so that we can validate that the framework we have now is  
sufficient, and if not refactor it now before it sits for too long.

