river-dev mailing list archives

From: Patricia Shanahan <p...@acm.org>
Subject: Re: ant run-tests does too much
Date: Sat, 20 Nov 2010 15:13:43 GMT
Sim IJskes - QCG wrote:
> On 11/20/2010 02:03 PM, Patricia Shanahan wrote:
>> See http://www.patriciashanahan.com/debug/index.html for how I approach
>> debug. In the debug loop, I think of a theory about what is going wrong,
>> design an experiment to test it, and run the experiment.
> 
> I've scanned the mentioned URL. Do you consider the experiments you
> write about different from unit tests?

Different, though in some cases experiments may inspire unit tests.
Experiments may involve questions about internal behavior of methods.

During debug, I don't just care whether a method works or not. If it
does not work, I need to know exactly why not. During testing, I usually
look at a method as a black box, and just try to find out if it always
does what its documentation says it does.
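
For instance (a made-up sketch, nothing from the River code base): a
black-box test would only assert on find()'s result, while a debug
experiment adds probes to check a specific theory about the internals:

    public class Experiment {
        // Hypothetical method under suspicion: a binary search that
        // sometimes fails to find an element that is present.
        static int find(int[] sorted, int key) {
            int lo = 0, hi = sorted.length - 1;
            while (lo <= hi) {
                int mid = (lo + hi) / 2;
                // Probe added for the experiment: watch the internal
                // state to test the theory that mid is computed wrongly.
                System.out.printf("lo=%d hi=%d mid=%d%n", lo, hi, mid);
                if (sorted[mid] < key) lo = mid + 1;
                else if (sorted[mid] > key) hi = mid - 1;
                else return mid;
            }
            return -1;
        }

        public static void main(String[] args) {
            // The experiment runs one failing input and records every
            // intermediate step, rather than just checking the result.
            int[] data = {1, 3, 5, 7, 9};
            System.out.println("result = " + find(data, 7));
        }
    }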

> In my approach a unit test can be designed as soon as an error is
> reproducible, and this unit test can be used to verify the fix of the
> error. The other already existing unit tests can be run to check for
> regressions. Is it just a choice of words where we differ, or do you
> see a real difference between experiments and unit tests? Or do I use
> the term unit test too broadly?

My River debug efforts have all been in cases in which we have a test,
but it was not being run. In those cases I see no need to write a new
test; I just make sure the existing tests get run.

If a bug comes in as a user report, there are at least two bugs: the
bug in the product code, and the bug in the test process that let it
escape into the field. Both need to be fixed, and usually that requires
at least one more test.
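
For instance (a hedged JUnit 4 sketch with invented names; the same
idea carries over to any test harness), the reported failure gets
reduced to a minimal case and kept as a permanent regression test:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class Issue1234RegressionTest {
        // Hypothetical method under test, inlined here only so the
        // sketch is self-contained.
        static int find(int[] sorted, int key) {
            for (int i = 0; i < sorted.length; i++) {
                if (sorted[i] == key) return i;
            }
            return -1;
        }

        // Captures the user-reported failure in its minimal form. It
        // should fail before the fix, pass after, and then run with
        // every build to guard against regression.
        @Test
        public void emptyInputReturnsNotFound() {
            assertEquals(-1, find(new int[0], 42));
        }
    }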

One problem that I'm having with River is bugs I can see by code
inspection, but have not yet been able to reproduce in a test. That is
always a difficult case.
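
A typical example (illustrative only, not one of the actual River bugs)
is an unsynchronized read-modify-write: inspection flags it
immediately, but a test may run millions of iterations without ever
losing an update:

    public class Counter {
        private int count;

        // Visible by inspection: count++ is a read-modify-write with
        // no synchronization, so concurrent calls can lose updates. A
        // test has to win a very narrow race to observe the failure,
        // which is why such bugs are hard to reproduce on demand.
        void increment() {
            count++;
        }

        int value() {
            return count;
        }
    }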

Patricia
