db-derby-dev mailing list archives

From "David W. Van Couvering" <David.Vancouver...@Sun.COM>
Subject Re: Monitoring/improving code coverage (was Re: code coverage results for trunk - svn revision no 390306)
Date Mon, 03 Apr 2006 23:31:18 GMT
I think it would be great to have more awareness of code coverage, 
especially when it gets worse.  Getting people to work on it is a 
different question, and I agree it may be difficult.  I know I would be 
alarmed if someone checked in a complicated feature and the code coverage 
for those packages was, say, 10%; at a minimum a discussion should 
ensue.  But as it stands we don't even know when that happens.
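To make the idea concrete, here is a rough sketch of the kind of per-package check being discussed: compare current coverage against a stored baseline and report anything that falls more than the agreed margin below it.  The package names, numbers, and the `regressions` helper are all invented for illustration; real baselines would come from the coverage tool's report files.

```python
# Hypothetical sketch: flag per-package coverage regressions against a
# baseline.  All data below is made up for illustration.

BASELINE = {  # coverage fraction recorded at the last release
    "org.apache.derby.impl.sql": 0.72,
    "org.apache.derby.impl.store": 0.65,
}

CURRENT = {  # coverage fraction measured after the latest checkin
    "org.apache.derby.impl.sql": 0.58,
    "org.apache.derby.impl.store": 0.66,
}

LOW_WATER_MARGIN = 0.10  # "10% below the current baseline" counts as a regression


def regressions(baseline, current, margin=LOW_WATER_MARGIN):
    """Return (package, old, new) for packages more than `margin` below baseline."""
    return sorted(
        (pkg, baseline[pkg], cov)
        for pkg, cov in current.items()
        if pkg in baseline and cov < baseline[pkg] - margin
    )


for pkg, old, new in regressions(BASELINE, CURRENT):
    print(f"Code Coverage Regression: {pkg} fell from {old:.0%} to {new:.0%}")
```

Wired into a nightly build, the loop at the bottom is what would produce the "Code Coverage Regression" email mentioned below; raising the low-water mark each release is just a matter of refreshing the baseline.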

I liked having a goal for each release of improving the code coverage by 
some modest amount -- incremental improvement.

Another area where I think increased awareness would be valuable: if we 
had a complexity analysis tool and some package jumped in 
complexity by 50% after a checkin...  But that's another itch for 
another email thread.


Daniel John Debrunner wrote:
> David W. Van Couvering wrote:
>>I like the idea of having it as a release barrier, and I also like the
>>idea of getting an email saying "Code Coverage Regression" and printing
>>out the package(s) that have regressed below a low-water mark.
> I'm not sure a release barrier will work. If the coverage is low in a
> certain area and no-one has the itch to work on it then is there no release?
> Think about how the coverage gets low in the first place. Someone
> contributes an improvement with some amount of testing.
> I think it's reasonable to reject such a contribution if there are
> no tests. Without tests there is no easy way for a committer to
> determine if the feature even works.
> Now if tests are contributed that show the feature basically works, but
> have low code coverage, is there really a justification to reject the
> contribution? The feature basically works according to the original
> contributor's itch, it's someone else's itch that more tests exist. One
> can always request more tests from the contributor, but I'm not sure you
> can force it.
>>What I am at a loss for is what the low-water mark should be.   I think
>>whatever we choose, we are going to have some immediate regressions.
>>Then the question becomes, how much work are we willing to put into this
>>to get it fixed.
>>One approach that comes to mind is to set a reachable goal for each
>>release as a step along the way to our ultimate goal.  For right now, a
>>regression could be if any package goes 10% below what our current
>>baseline is.  Then we try to raise the water each release and re-set our
> Not sure how we can get people to scratch the code coverage itch. It
> seems we can't get a lot of folks interested in fixing the 150+ bugs out
> there, never mind writing more tests that might uncover more bugs. I
> would love it if we could find a way.
> Bottom line is that if people don't care about code coverage they are
> not going to work on improving it.
> Dan.
