harmony-dev mailing list archives

From "Spark Shen" <smallsmallor...@gmail.com>
Subject Re: [buildtest] pass rate definition
Date Fri, 19 Oct 2007 03:14:15 GMT
2007/10/18, Leo Li <liyilei1979@gmail.com>:
>
> On 10/18/07, Alexei Fedotov <alexei.fedotov@gmail.com> wrote:
> > Hello,
> >
> > I'm involved in numerous discussions on the subject, and want to make
> > these discussions transparent to those community members who are
> > interested. Imagine we have a test suite which contains the six
> > following tests:
> >
> > BuggyTest.java
> >    The test fails due to a bug in the test itself.
> >
> > FailingReferenceTest.java
> >    The test fails on Harmony and passes on RI. The test design does
> > not imply that the test should pass.
> >
> > IntermittentlyFailingTest.java
> >    The test fails intermittently due to an HDK bug.
> >
> > UnsupportedTest.java
> >    The test produces an expected failure due to unimplemented
> > functionality in the HDK.
> >
> > FailingTest.java
> >    The test fails due to an HDK bug.
> >
> > PassingTest.java
> >    This one prints PASSED and completes successfully.
> >
> > What would be the correct formula to define a pass rate? All agree
> > that the rate is the number of passed tests divided by the total
> > number of tests. Then people start to argue about what the numerator
> > and the denominator should be.
> >
> > One person may count every failure against the suite and get a 16.66%
> > pass rate. Others get 50% by ignoring all failure reasons except the
> > ones that point to a fixable HDK bug.
> >
> > If anyone could share common-sense knowledge or Apache practices on
> > the subject, that would be interesting.
> >
> >
>
> I think how to define the passing rate depends not only on the reasons
> the tests fail, but also on the scope of the tests themselves and on
> what the passing rate is meant to convey.
>
> Current tests can be separated into two categories:
> 1. Tests provided by harmony developers.
> 2. Tests provided by applications.


I like your classification. :-)

> For 1, I do not think the passing rate proves much. The current
> process requires that a test is not checked in to the source code
> until it passes on a Harmony build, if I am not missing something.
> Although there are some exceptions, we try to achieve this goal. Thus
> Harmony is assumed to pass all these tests; if there is missing
> functionality or a known bug to be fixed, the test is not supposed to
> exist in Harmony's code base and is not counted in the passing rate.


I have a different opinion from you about UnsupportedTest.java and
tests that fail on known issues. (Can they be categorized as
FailingReferenceTest, IntermittentlyFailingTest, or FailingTest in
Alexei's terms?)

I understand the passing rate as an indicator of Harmony's progress,
and these test failures remind us of what remains to be improved. Why
should they be excluded?

Of course, if there is some common-sense rule or best practice, I will
stick to it.
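
For concreteness, here is a minimal sketch (plain Java, not Harmony
code) of how the two conventions Alexei describes yield roughly 16.66%
versus 50% on his six-test suite. The "lenient" exclusion rules below
are only one possible reading of the 50% figure, not a documented
policy:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class PassRateSketch {
        enum Result { PASS, TEST_BUG, REFERENCE_FAIL,
                      INTERMITTENT_HDK_BUG, EXPECTED_UNSUPPORTED, HDK_BUG }

        public static void main(String[] args) {
            Map<String, Result> suite = new LinkedHashMap<String, Result>();
            suite.put("BuggyTest", Result.TEST_BUG);
            suite.put("FailingReferenceTest", Result.REFERENCE_FAIL);
            suite.put("IntermittentlyFailingTest", Result.INTERMITTENT_HDK_BUG);
            suite.put("UnsupportedTest", Result.EXPECTED_UNSUPPORTED);
            suite.put("FailingTest", Result.HDK_BUG);
            suite.put("PassingTest", Result.PASS);

            int strictPassed = 0, lenientPassed = 0, lenientTotal = 0;
            for (Result r : suite.values()) {
                // Strict convention: every non-pass counts as a failure.
                if (r == Result.PASS) strictPassed++;
                // Lenient convention: test bugs and reference failures
                // leave the denominator; an expected "unsupported"
                // failure counts as a pass.
                if (r == Result.TEST_BUG || r == Result.REFERENCE_FAIL) continue;
                lenientTotal++;
                if (r == Result.PASS || r == Result.EXPECTED_UNSUPPORTED)
                    lenientPassed++;
            }
            // Prints: strict 1/6 = 16.7%, lenient 2/4 = 50.0%
            System.out.printf("strict  %d/%d = %.1f%%%n", strictPassed,
                    suite.size(), 100.0 * strictPassed / suite.size());
            System.out.printf("lenient %d/%d = %.1f%%%n", lenientPassed,
                    lenientTotal, 100.0 * lenientPassed / lenientTotal);
        }
    }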


> For 2, the passing rate can to some degree reflect Harmony's
> maturity, and normally there is no need to differentiate why the
> tests do not pass. Except for the few failures caused by an
> application's improper dependency on Sun's behavior (for example,
> Sean discovered that Jython assumes a specific iteration order for
> the entries stored in a HashMap; that is actually a bug in Jython
> which just happens to pass on the RI), a failure in the majority of
> application-provided tests reveals a bug or an incompatibility that
> Harmony should try to resolve, since we are trying to deliver a
> product compatible with the RI so that users can switch seamlessly.
>
> > --
> > With best regards,
> > Alexei,
> > ESSD, Intel
> >
>
>
> --
> Leo Li
> China Software Development Lab, IBM
>
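
As a side note on Leo's Jython example: java.util.HashMap does not
specify any iteration order, so code that depends on the order it
happens to observe can work on one VM and break on another. A minimal
sketch with made-up keys (not the actual Jython code):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.TreeMap;

    public class MapOrderSketch {
        public static void main(String[] args) {
            Map<String, Integer> m = new HashMap<String, Integer>();
            m.put("alpha", 1);
            m.put("beta", 2);
            m.put("gamma", 3);

            // Unspecified order: RI and Harmony may legally print
            // these entries in different sequences.
            for (Map.Entry<String, Integer> e : m.entrySet())
                System.out.println(e.getKey() + " = " + e.getValue());

            // Code that needs a stable order should ask for one
            // explicitly, e.g. TreeMap (sorted by key) or
            // LinkedHashMap (insertion order).
            for (Map.Entry<String, Integer> e
                    : new TreeMap<String, Integer>(m).entrySet())
                System.out.println(e.getKey() + " = " + e.getValue());
        }
    }

A test that asserts the first loop's output matches the RI's order is
a bug in the test, not an incompatibility in Harmony.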



-- 
Spark Shen
China Software Development Lab, IBM
