db-derby-dev mailing list archives

From Rajesh Kartha <karth...@gmail.com>
Subject Re: Derby metrics
Date Fri, 05 May 2006 23:23:48 GMT
Kathey Marsden wrote:

> Rajesh Kartha wrote:
>
>>
>> The other useful one that I can think of is:
>> - Test case effectiveness: the ratio of test cases that yielded
>> defects to the total number of test cases
>>
>
> Could you explain this a little more? I don't understand how we
> would measure this. When a test is created or brought into
> client testing, it may find some number of bugs, but then it should pass
> and continue to pass. Developers will find certain tests more
> useful than others as they find they catch issues in their changes,
> but it seems like that would be hard to measure, as the issues would be
> resolved before they check in.
>
> Kathey
>
>
On second thought, I agree that getting this ratio may be tricky in the
case of Derby.

Test case effectiveness is typically one of the indicators used to decide
whether the existing suite of test cases needs updating to improve coverage
or add complexity.
In the case of Derby, since we follow the practice of nightlies and a clean
derbyall run before submissions/checkins, the number of JIRA defects
resulting from the test cases could be expected to be low. However, we do
occasionally see code-related JIRA issues being logged as a result of test
failures, which I assume could be used in the measurements.
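
To make the ratio concrete, here is a rough sketch of how the measurement
could be computed. The class name and the counts are made up for
illustration; in practice the numbers would come from the test suites and
from JIRA issues linked to failing tests.

    // Illustration only; the counts below are hypothetical and would in
    // practice be derived from JIRA issues linked to failing tests.
    public class TestCaseEffectiveness {
        public static void main(String[] args) {
            int totalTestCases = 800;       // hypothetical total in the suite
            int defectYieldingTests = 32;   // hypothetical tests that led to JIRA defects

            double effectiveness = (double) defectYieldingTests / totalTestCases;
            System.out.printf("Test case effectiveness: %.1f%%%n",
                              effectiveness * 100);   // prints 4.0%
        }
    }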

We could probably, to some extent, rely on code coverage numbers to
identify code areas not covered by testing. But I do think we may also,
at some point, need to understand 1) the overlap among existing test cases
(redundant testing) and 2) how to increase the complexity and efficiency of
the test cases.
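
As a rough sketch of what measuring that overlap might look like: this is
only an illustration, assuming a coverage tool could report the set of
classes exercised by each test case, and the test and class names are made
up.

    import java.util.HashSet;
    import java.util.Set;

    // Illustration only: pairwise overlap between the sets of classes
    // exercised by two test cases, as a hint of redundant testing.
    public class CoverageOverlap {
        static double jaccard(Set<String> a, Set<String> b) {
            Set<String> intersection = new HashSet<String>(a);
            intersection.retainAll(b);
            Set<String> union = new HashSet<String>(a);
            union.addAll(b);
            return union.isEmpty() ? 0.0
                                   : (double) intersection.size() / union.size();
        }

        public static void main(String[] args) {
            // Hypothetical per-test coverage data.
            Set<String> testA = new HashSet<String>();
            testA.add("ClassX"); testA.add("ClassY"); testA.add("ClassZ");
            Set<String> testB = new HashSet<String>();
            testB.add("ClassX"); testB.add("ClassY");

            System.out.printf("Overlap between testA and testB: %.2f%n",
                              jaccard(testA, testB));   // prints 0.67
        }
    }

A high overlap score between two tests would not by itself prove redundancy,
but it could point to areas worth reviewing when trying to streamline the
suite.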

-Rajesh


