couchdb-dev mailing list archives

From Dale Harvey <>
Subject Re: [PROPOSAL] Improving the quality of CouchDB
Date Thu, 10 Sep 2015 09:40:02 GMT
I don't think CI is a dream. It should take an afternoon at most to get
CouchDB set up on Travis on a single platform to ensure no major regressions
come through. If anyone wants help doing that, feel free to ping me on
#pouchdb / #couchdb; we already test CouchDB master on Travis.
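As a rough sketch, a minimal single-platform Travis configuration could look
like the following (the OTP release and build targets are assumptions, not
CouchDB's actual CI config):

```yaml
# .travis.yml — minimal single-platform sketch; release number and
# build targets are assumed examples, adjust to the repo's real entry points
language: erlang
otp_release:
  - 17.5
before_script:
  - ./configure --disable-docs --disable-fauxton
script:
  - make check
```

One Erlang version and one OS is enough to catch the "completely breaks the
suite" class of regression before merge; the matrix can grow later.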

I do think multiple repos are an issue, and that solution is not so simple;
we went through the same issues with PouchDB when we attempted to split out
our own modules into separate repositories.

The basic problem is: X is a downstream component of CouchDB in its own
repo. X may have its own unit tests etc., but I can still make a change
to X, have everything passing, and still completely break the CouchDB
suite. This also entirely breaks git bisect, so not only are breakages more
likely, they are significantly harder to debug.

The two main solutions that have worked to some degree for us:

1. Don't split out into lots of repositories. If you put those components
inside the CouchDB repo, then the CouchDB tests will run against them when
changes are made, and you won't break the CouchDB repo.

2. For anything that does live outside the CouchDB repo, pin its version
inside the CouchDB repo, and don't have commits to subproject X
automatically applied to CouchDB. That means you can commit whatever you
want to X and CouchDB will still be working; when you come to update the
pinned version of X, you can see that it breaks and hold off updating until
it is fixed.

  2.5 Another strategy that can help with this is to have the full CouchDB
test suite run inside X, so changes to X run the full CouchDB suite with an
updated X.
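Pinning as in option 2 is how rebar-style dependencies are usually handled;
a sketch, in which the dependency name, URL, and tag are hypothetical:

```erlang
%% rebar.config fragment in the couchdb repo — the dep name, URL and
%% tag below are made-up examples of pinning a subproject to a
%% known-good release
{deps, [
    {couch_x, ".*",
        {git, "https://github.com/apache/couchdb-couch-x.git",
         {tag, "1.2.0"}}}
]}.
```

Commits land freely in couch_x's own repo; CouchDB only sees them when
someone bumps the tag here, and CI runs against that bump before it merges.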


On 9 September 2015 at 19:39, Alexander Shorin <> wrote:

> On Wed, Sep 9, 2015 at 8:13 PM, Robert Kowalski <> wrote:
> >> So, what is the immediate next step that you propose to fix or improve?
> >> What is the most squeaky wheel, or the most modest gain that we can
> >> make?
> >
> > I think one of the biggest problems is that we don't have a working CI
> > and that PRs are getting merged without running it. The other problem is
> > that we have a lot of untested changes going into master.
> >
> > For a step by step plan I suggest:
> >
> >  - Set up a simple CI that makes it easy to test before a merge, e.g.
> > like this [1]. The CI can be really simple, just one Erlang version and
> > one OS, but let's get started somewhere. Do you have other ideas how to
> > solve CI, PRs and the multi-repo approach?
> Who will set up and host this CI machine? I guess we aren't talking about
> CI within ASF infra here.
> The solution for multi-repo PRs is simple: use the GitHub API to subscribe
> to PR comments and changes. With a special comment, the author sets
> references to the other PRs. Subscribing to changes is required to catch
> PR updates. The special comment is required to prevent a false start where
> CI tries to build a PR without its related ones. CI should also be able to
> catch new references over time, e.g. because the author added missing
> tests to another repository after a green CI build.
> >  - With a working CI system for the multi-repo setup, merging should
> > only happen with a green CI. Flaky tests should get fixed or deleted.
> We need to define a policy for what to do when CI is red for a reason
> unrelated to the PR.
> >  - We would require that every change needs tests if not covered
> > elsewhere already. This might include that untestable code has to be
> > refactored first, a problem many legacy applications face (btw, a good
> > book on this is "Working Effectively with Legacy Code"). This would
> > automatically make our code more testable and increase test coverage
> > over time in a very efficient way. This would also allow us to add
> > features while slowly refactoring the code.
> What tools will be used to make this requirement mandatory?
> And a final note: what is the (approximate) target date for all of this?
> While CI is a dream, what can we do to tolerate this problem *now*?
> For instance we can require that:
> - A reviewer who gives a +1 must first run the full test suite locally and
> ensure it passes;
> - A reviewer must ensure that tests for the related changes are included;
> and so on. Anything else?
> --
> ,,,^..^,,,
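The "special comment" mechanism Alexander describes — a PR comment that
names the related PRs in other repos so CI knows to build them together —
could be sketched as a small parser. The marker name and comment format
below are hypothetical, not an existing convention:

```python
import re

# Hypothetical convention: a PR comment carries a marker line naming the
# related PRs in other repos, e.g.
#   CI-depends-on: apache/couchdb-chttpd#42, apache/couchdb-fabric#7
# A CI bot would hold the build until every referenced PR is fetched,
# avoiding the "false start" Alexander mentions.

MARKER = "ci-depends-on:"
REF_RE = re.compile(r"([\w.-]+/[\w.-]+)#(\d+)")

def extract_related_prs(comment_body):
    """Return (repo, pr_number) pairs named on marker lines of a comment."""
    refs = []
    for line in comment_body.splitlines():
        if line.strip().lower().startswith(MARKER):
            refs.extend((repo, int(num))
                        for repo, num in REF_RE.findall(line))
    return refs
```

Re-running the extraction whenever a comment is edited covers the "catch
new references over time" case, since the bot sees the updated marker line.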
