river-dev mailing list archives

From Peter Firmstone <j...@zeus.net.au>
Subject Re: ServiceDiscoveryManager test coverage
Date Wed, 25 Aug 2010 08:34:34 GMT
Patricia Shanahan wrote:
> On 8/22/2010 4:57 PM, Peter Firmstone wrote:
> ...
>> Thanks Patricia, that's very helpful. I'll figure out where I went
>> wrong this week; it really shows the importance of full test coverage.
> ...
> I strongly agree that test coverage is important. Accordingly, I've 
> done some analysis of the "ant qa.run" output.
> There are 1059 test description (*.td) files that exist, and are 
> loaded at the start of "ant qa.run", but that do not seem to be run. 
> I've extracted the top level categories from those files:
> constraint
> discoveryproviders_impl
> discoveryservice
> end2end
> eventmailbox
> export_spec
> io
> javaspace
> jeri
> joinmanager
> jrmp
> loader
> locatordiscovery
> lookupdiscovery
> lookupservice
> proxytrust
> reliability
> renewalmanager
> renewalservice
> scalability
> security
> start
> txnmanager
> I'm sure some of these tests are obsolete, duplicates of tests in 
> categories that are being run, or otherwise inappropriate, but there 
> does seem to be a rich vein of tests we could mine.
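For illustration, here is a minimal sketch (in Python, with an assumed directory layout) of how one might enumerate the top-level categories that contain .td test description files; the root path and helper name are hypothetical, not part of the actual qa harness:

```python
from pathlib import Path

def td_categories(root):
    """Return the sorted top-level directory names under `root`
    that contain at least one *.td test description file.

    `root` is assumed to be the qa test source tree; this helper
    is illustrative only, not part of the River build."""
    cats = set()
    root = Path(root)
    for td in root.rglob("*.td"):
        # First path component relative to the root is the category.
        cats.add(td.relative_to(root).parts[0])
    return sorted(cats)
```

Comparing that listing against the categories the harness actually executes would show which of the 1059 files are being skipped.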
> Part of the problem may be time to run the tests. I'd like to propose 
> splitting the tests into two sets:
> 1. A small set that one would run in addition to the relevant tests, 
> whenever making a small change. It should *not* be based on skipping 
> complete categories, but on doing those tests from each category that 
> are most likely to detect regression, especially regression due to 
> changes in other areas.
> 2. A full test set that may take a lot longer. In many projects, there 
> is a "nightly build" and a test sequence that is run against that 
> build. That test sequence can take up to 24 hours to run, and should 
> be as complete as possible. Does Apache have infrastructure to support 
> this sort of operation?
> Are there any tests that people *know* should not run? I'm thinking of 
> running the lot just to see what happens, but knowing ones that are 
> not expected to work would help with result interpretation.
> Patricia

Good ideas Patricia, you bring some very valuable test experience, thank 
you.  Any tests that require a KDC server will fail, as will the 
jiniproxy tests, which require a Squid proxy server.

There is also the question of the main build still creating jar files 
for compatibility with earlier Jini platforms; we can take the 
opportunity to remove those archives and update the tests to use the new 
ones.

The jar files to be removed are marked as such in the main build.xml file.

I've gotten to the bottom of what's causing the failures: one is a null 
reference, the other a serialization problem. The first is easy to fix; 
the second will take a bit more thought.
