couchdb-dev mailing list archives

From Robert Dionne <>
Subject Re: [PATCH] Eunit Tests
Date Mon, 16 Feb 2009 12:37:21 GMT

On Feb 16, 2009, at 7:21 AM, Jan Lehnardt wrote:

> On 16 Feb 2009, at 12:49, Robert Dionne wrote:
>> Take couch_btree for instance. It already has a test() function
>> which, if you look closely, exercises most of the public functions,
>> so one could readily put these in a separate module (which I
>> actually did when I first started playing with runner). However,
>> consider chunkify or modify_node. I suppose one could have a
>> separate couch_btree_tests module that just includes couch_btree
>> in order to test these, or one could export those functions. I'm
>> not familiar enough with Erlang internals to know whether these
>> approaches significantly alter the code paths. How would you test
>> something like chunkify?
> Pure unit-test logic would suggest creating a couch_btree_chunker
> module. Like how I did with couch_config and couch_config_writer.
> But then, pragmatism will call heresy and I'd probably agree.

For sure, "In theory there's no difference between theory and  
practice, but in practice there is"  :)

> (For the record, desk-sharing Alex (remember, my resident
> TDD-guru :), would break the chunkify function out.)
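
There is a middle ground between breaking chunkify out into its own module and exporting it unconditionally: the -ifdef conditional-export idiom, where private helpers are exported only in test builds. A sketch (the chunkify/1 arity and the TEST macro name are illustrative; the macro would be set with erlc -DTEST or by the build):

```erlang
%% In couch_btree.erl -- normal public API exports stay as they are.
-module(couch_btree).

%% Export private helpers only when compiled for testing,
%% so production builds keep them internal.
-ifdef(TEST).
-export([chunkify/1]).
-endif.
```

A couch_btree_tests module can then call couch_btree:chunkify/1 directly in test builds without the export leaking into release code.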
>> In any event my point is only that you need to do both. I think  
>> the EUnit folks intended this judging from the documentation[1].
> The only problem I see here is that tests belonging to a single module
> are run in different places and you'd have to track down where the  
> test

   hmm... actually, according to the docs, if you run  
eunit:test([couch_btree]) it does both (I haven't tested this yet). In  
other words, it ensures couch_btree:test() is run and also finds and  
runs the tests in couch_btree_tests. So if I read this correctly it  
solves the issue, non?  Oh, I see, it wouldn't tell you which ones  
failed?
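On the "which ones failed" question, EUnit does take an options list, and the verbose option prints each test as it runs rather than just a summary, so failures are attributable to a specific test function. A sketch (untested against this codebase; per the EUnit docs, testing a module means running the tests of both the module and its companion _tests module):

```erlang
%% From an Erlang shell: run the tests discovered in couch_btree
%% and in couch_btree_tests (if that module exists), printing
%% each test as it executes so a failure names its source.
eunit:test(couch_btree, [verbose]).
```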

> lives, first. But we could make things a little more complex and  
> make the
> test output verbose enough to show you what went wrong where.
> Something like:
> Testing Private APIs...passed.
> Testing Public APIs...passed.
> Testing HTTP API...passed.
> But then, we could do it all inline. At the moment, this looks
> like the most KISS solution. But we're doing ground-work
> here and getting it right now is probably more important.
> Another aspect is looooooooooooooooong files with loads
> of unit-tests that are better broken up for easier handling :)
> In any case, a good discussion, I think, keep it up!
> Cheers
> Jan
> --
