river-dev mailing list archives

From Patrick Wright <pdoubl...@gmail.com>
Subject Re: Ignored tests
Date Tue, 24 Aug 2010 20:02:33 GMT
Hi Patricia

Is there a solid baseline we could test against, for example Jini
2.1, to see how many passes and failures we should expect?

Thanks for all the hard work

On Tue, Aug 24, 2010 at 9:58 PM, Patricia Shanahan <pats@acm.org> wrote:
> I ran a batch of the previously ignored QA tests overnight. I got 156 passes
> and 64 failures. This is nowhere near as bad as it sounds, because many of
> the failures were clusters of related tests failing in similar ways,
> suggesting a single problem affecting the shared infrastructure for that test
> category. Some of the failures may relate to the known regression that Peter
> is going to look at this week.
>
> Also, it is important to remember that the bugs may be in the tests, not in
> the code under test. A test may be obsolete, relying on behavior that is
> no longer supported.
>
> I do think there is a good enough chance that at least one of the failures
> represents a real problem, and an opportunity to improve River, that I plan
> to start a background activity looking at the failed tests to see what is
> going on. The objective is to do one of three things for each cluster of
> failures:
>
> 1. Fix River.
> 2. Fix the test.
> 3. Decide the test is unfixable, and delete it. There is no point spending
> disk space, file transfer time, and test load time on tests we are never
> going to run.
>
> Running the subset I did last night took about 15 hours, but that included a
> lot of timeouts.
>
> Patricia
