couchdb-dev mailing list archives

From Joan Touzet <>
Subject Re: Use ExUnit to write unit tests.
Date Wed, 22 May 2019 20:16:18 GMT
Hi Ilya, thanks for starting this thread. Comments inline.

On 2019-05-22 14:42, Ilya Khlopotov wrote:
> The eunit testing framework is very hard to maintain. In particular, it has the following
> problems:
> - the process structure is designed in such a way that a failure in setup or teardown of
> one test affects the execution environment of subsequent tests, which makes it really
> hard to locate where the problem is coming from.

I've personally experienced this a lot when reviewing failed logfiles,
trying to find the *first* failure where things go wrong. It's a huge
time sink.

> - an inline test in the same module as the functions it tests might be skipped
> - incorrect usage of ?assert vs ?_assert is not detectable, since the misused assertion still lets tests pass

> - there is a weird (and hard to debug) interaction when used in combination with meck

>    -
>    -
>    - meck:unload() must be used instead of meck:unload(Module)

Eep! I wasn't aware of this one. That's ugly.

> - teardown is not always run, which affects all subsequent tests

I've experienced this one first-hand too.

> - grouping of tests is tricky
> - it is hard to group tests so individual tests have meaningful descriptions
> We believe that with ExUnit we wouldn't have these problems:

Who's "we"?

> - on_exit function is reliable in ExUnit
> - it is easy to group tests using `describe` directive
> - code generation is trivial, which makes it possible to generate tests from a formal
> spec (if/when we have one)
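For readers less familiar with ExUnit, a minimal sketch of the three points above — `describe` grouping, `on_exit` cleanup, and test generation from data. The `Doc.bump/1` helper and all names here are invented for illustration:

```elixir
ExUnit.start()

defmodule Doc do
  # Hypothetical helper, so the generated tests have something to call.
  def bump(rev), do: rev + 1
end

defmodule DocTest do
  use ExUnit.Case, async: true

  describe "rev bumping" do
    setup do
      # on_exit runs in a separate, supervised process after the test
      # finishes, even if the test process itself crashes.
      on_exit(fn -> :ok end)
      :ok
    end

    # Test generation is plain Elixir: one test per input pair.
    for {rev, expected} <- [{1, 2}, {41, 42}] do
      @rev rev
      @expected expected
      test "bumps rev #{rev} to #{expected}" do
        assert Doc.bump(@rev) == @expected
      end
    end
  end
end
```

Each generated test gets its own descriptive name, which speaks to the "meaningful descriptions" complaint about EUnit grouping.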

Can you address the timeout question w.r.t. EUnit that I raised
elsewhere for cross-platform compatibility testing? I know that
Peng ran into the same issues I did here and was looking into extending
the timeouts.

Many of our tests fail on CI simply because resources there are slow
and runs take longer than expected. Does ExUnit have any additional
support here?

A suggestion was made (by Jay Doane, I believe, on IRC) that perhaps we
simply remove all timeout==failure logic (somehow?) and consider a
timeout a hung test run, which would eventually fail the entire suite.
This would ultimately lead to better deterministic testing, but we'd
probably uncover quite a few bugs in the process (esp. against CouchDB
<= 4.0).
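For what it's worth, ExUnit does have a per-test timeout (60 seconds by default) that can be raised per test or disabled globally — which, if I understand it correctly, would give us exactly the "timeout == hung suite" behaviour described above. A sketch (the test name is invented):

```elixir
# In test/test_helper.exs: disable the per-test timeout entirely, so a
# hung test hangs the run rather than being reported as a failure.
ExUnit.start(timeout: :infinity)

defmodule SlowTest do
  use ExUnit.Case

  # Or override per test: allow five minutes for one known-slow case.
  @tag timeout: 300_000
  test "slow replication scenario" do
    assert true
  end
end
```

Whether that is enough for the cross-platform CI cases would still need to be confirmed.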

> Here are a few examples:
> # Test adapters to test different interfaces using same test suite

This is neat. I'd like someone else to comment on whether the approach
you define will handle the polymorphic interfaces gracefully, or if the
effort to parametrise/DRY out the tests will be more difficult than
simply maintaining 4 sets of tests.
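For concreteness, here is one shape the adapter approach could take — a shared suite injected via `use`, run once per adapter. Everything here (`KVAdapterSuite`, `MapAdapter`, the `get/1` contract) is a hypothetical sketch, not CouchDB code:

```elixir
ExUnit.start()

# Hypothetical shared suite: any adapter module exposing get/1 can be
# exercised by the same tests.
defmodule KVAdapterSuite do
  defmacro __using__(opts) do
    quote do
      use ExUnit.Case, async: true
      @adapter unquote(opts[:adapter])

      test "get/1 returns :not_found for an unknown key" do
        assert @adapter.get(:no_such_key) == :not_found
      end
    end
  end
end

# One concrete adapter; a real setup would add an HTTP adapter, a
# clustered adapter, etc., each wired up in a one-line test module.
defmodule MapAdapter do
  def get(_key), do: :not_found
end

defmodule MapAdapterTest do
  use KVAdapterSuite, adapter: MapAdapter
end
```

The open question stands: whether the interfaces are uniform enough for one suite, or whether per-adapter quirks erode the DRY win.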

> # Using the same test suite to compare a new implementation of an interface with the
> old one
> Imagine that we are doing a major rewrite of a module which would implement the same

*tries to imagine such a 'hypothetical' rewrite* :)

> How do we verify that both implementations return the same results for the same input?
> It is easy in Elixir, here is a sketch:

Sounds interesting. I'd again like an analysis (from someone else) as to
how straightforward this would be to implement.
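My reading of the proposed sketch, in hypothetical form (the `MergeOld`/`MergeNew` modules stand in for whatever is being rewritten): generate one test per input, asserting both implementations agree.

```elixir
ExUnit.start()

# Hypothetical old and new implementations of the same tiny interface.
defmodule MergeOld do
  def merge(a, b), do: a ++ b
end

defmodule MergeNew do
  def merge(a, b), do: Enum.concat(a, b)
end

defmodule EquivalenceTest do
  use ExUnit.Case

  # Feed both implementations the same inputs; each input becomes its
  # own named test, so a divergence pinpoints the failing case.
  for {a, b} <- [{[], []}, {[1], [2]}, {[1, 2], [3, 4]}] do
    @a a
    @b b
    test "old and new agree on #{inspect(a)} ++ #{inspect(b)}" do
      assert MergeOld.merge(@a, @b) == MergeNew.merge(@a, @b)
    end
  end
end
```

How well this scales to stateful interfaces is exactly the analysis I'd want from someone else.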

