harmony-dev mailing list archives

From "Sean Qiu" <sean.xx....@gmail.com>
Subject Re: [buildtest] pass rate definition
Date Thu, 18 Oct 2007 13:21:59 GMT
2007/10/18, Leo Li <liyilei1979@gmail.com>:
>
> On 10/18/07, Alexei Fedotov <alexei.fedotov@gmail.com> wrote:
> > Hello,
> >
> > I'm involved in numerous discussions on the subject, and want to make
> > these discussions transparent to those community members who are
> > interested. Imagine we have a test suite which contains the following
> > six tests:
> >
> > BuggyTest.java
> >    The test fails due to a bug in the test itself.
> >
> > FailingReferenceTest.java
> >    The test fails on Harmony and passes on RI. The test design does
> > not imply that the test should pass.
> >
> > IntermittentlyFailingTest.java
> >    The test fails intermittently due to an HDK bug.
> >
> > UnsupportedTest.java
> >    The test produces an expected failure due to unimplemented
> > functionality in the HDK.
> >
> > FailingTest.java
> >    The test fails due to an HDK bug.
> >
> > PassingTest.java
> >    This one prints PASSED and completes successfully.
> >
> > What would be the correct formula to define a pass rate? All agree
> > that the rate is the number of passed tests divided by the total
> > number of tests. Then people start to argue about what the numerator
> > and the denominator should be.
> >
> > One may say that they count every failure as a bug; then they get a
> > 16.66% pass rate. Others get 50%, ignoring all fail reasons except
> > the one that produces a fixable HDK failure.
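The two rates can be reproduced with a minimal sketch; the enum tags, class
name, and policy lists below are illustrative names I made up, not anything
from Harmony's actual test harness:

```java
import java.util.List;

public class PassRate {
    // Hypothetical result tags, one per test in the suite above;
    // the names are illustrative, not Harmony's actual terminology.
    enum Result { PASS, TEST_BUG, NOT_EXPECTED_TO_PASS,
                  INTERMITTENT_HDK_BUG, UNSUPPORTED, HDK_BUG }

    // Pass rate (percent): passes divided by passes plus those failures
    // whose reasons the chosen policy counts in the denominator.
    static double rate(List<Result> results, List<Result> countedFailures) {
        long total = results.stream()
                .filter(r -> r == Result.PASS || countedFailures.contains(r))
                .count();
        long passed = results.stream().filter(r -> r == Result.PASS).count();
        return 100.0 * passed / total;
    }

    public static void main(String[] args) {
        List<Result> suite = List.of(
                Result.TEST_BUG,             // BuggyTest
                Result.NOT_EXPECTED_TO_PASS, // FailingReferenceTest
                Result.INTERMITTENT_HDK_BUG, // IntermittentlyFailingTest
                Result.UNSUPPORTED,          // UnsupportedTest
                Result.HDK_BUG,              // FailingTest
                Result.PASS);                // PassingTest

        // Count every failure: 1 passed out of 6 -> the 16.66% figure
        System.out.printf("all failures counted: %.2f%%%n",
                rate(suite, List.of(Result.values())));
        // Count only the deterministic, fixable HDK failure: 1/2 -> 50%
        System.out.printf("HDK bugs only:        %.2f%%%n",
                rate(suite, List.of(Result.HDK_BUG)));
    }
}
```

The disagreement in the thread is thus entirely about which failure reasons
belong in `countedFailures`, not about the division itself.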
> >
> > If anyone could share common-sense knowledge or Apache practices on
> > the subject, that would be interesting.
> >
> >
>
> I think how to define the passing rate depends not only on the
> reasons behind the failing tests, but also on the scope of the tests
> themselves and on what the passing rate is meant to convey.
>
> Current tests can be separated into two categories:
> 1. Tests provided by harmony developers.
> 2. Tests provided by applications.
>
> For 1, I do not think the passing rate can prove much. The current
> process requires that a test is not checked into the source code until
> it passes on a Harmony build, if I am not missing something. Although
> there are some exceptions, we try to achieve this goal. Thus Harmony is
> assumed to pass all these tests; if there is missing functionality or a
> known bug to be fixed, the test is not supposed to exist in Harmony's
> code base and cannot be counted in the passing rate.


+1, these tests should be expected to succeed all the time, since they
are designed to guarantee that our modifications and improvements are
correct; no regression failures are acceptable.


> For 2, the passing rate can to some degree reflect Harmony's maturity,
> and I think there is normally no need to differentiate why the tests do
> not pass. Except for a few failures caused by an application's improper
> dependency on Sun's behavior (for example, Sean has discovered that
> Jython assumes a specific order for the entries stored in a HashMap;
> that is actually a bug which just happens to pass on the RI), most
> failures among the tests provided by applications reveal a bug or an
> incompatibility that Harmony should try to resolve, since we are trying
> to offer a product compatible with the RI so that users can switch
> seamlessly.
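The HashMap-ordering pitfall can be illustrated with a small sketch (the
class and method names here are hypothetical, not Jython's): iteration
order of java.util.HashMap is unspecified, so any order observed on the RI
is a coincidence, while LinkedHashMap makes insertion order a contract on
every conforming JVM:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    // Join the map's keys in iteration order.
    static String joinKeys(Map<String, Integer> m) {
        return String.join(",", m.keySet());
    }

    public static void main(String[] args) {
        // LinkedHashMap guarantees insertion-order iteration;
        // a plain HashMap here would make the output JVM-dependent.
        Map<String, Integer> ordered = new LinkedHashMap<>();
        ordered.put("a", 1);
        ordered.put("c", 2);
        ordered.put("b", 3);
        System.out.println(joinKeys(ordered)); // prints a,c,b
    }
}
```

Code that genuinely needs a predictable order, as Jython apparently did,
should depend on the LinkedHashMap contract rather than on whatever a
particular VM's HashMap happens to do.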


+1 again.
Why is the application test pass rate important to Harmony?
Because it indicates that our code is qualified and worth using (and it
finds bugs, of course).
This will attract more and more adoption.


> --
> > With best regards,
> > Alexei,
> > ESSD, Intel
> >
>
>
> --
> Leo Li
> China Software Development Lab, IBM
>



-- 
Sean Qiu
China Software Development Lab, IBM
