river-dev mailing list archives

From Jonathan Costers <jonathan.cost...@googlemail.com>
Subject Re: ServiceDiscoveryManager test coverage
Date Wed, 01 Sep 2010 11:03:16 GMT
> As an external, non-committing but interested observer, I agree with
> Patricia and Jonathan. The team I've been working with switched to
> using branches extensively in the last year. Developers open branches
> "per feature" and synchronize/pull from trunk as stable changes are
> merged into trunk. Code reviews can also be done before the
> integration merge. In our case, we also serialize integrations so that
> as a rule no integration takes place when trunk is unstable, e.g. as
> soon as an integration breaks trunk, it's rolled back as a unit, trunk
> is re-verified, and the developer gets in line to try again later.

That has been my experience too in maintaining large projects.
Especially the reviewing part is critical, IMO. Branching off makes it much
easier to do that rigorously.
Heck, it would be amazing if the (rather complex) changes that have been
made (to class loading, policy granting, remote events, etc.) had been done
right the first time by a single person without much peer review. So indeed,
I believe these changes belong in a feature/experimentation branch for
now, until we get our arms around the issues they are causing.
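The per-feature flow described above, for what it's worth, can be sketched end to end. This is purely an illustration: the branch and file names are made up, and it uses Git commands for the sake of a runnable example, not because the project's repository works this way.

```shell
#!/bin/sh
# Sketch of a per-feature branching flow: branch off, develop,
# sync with trunk, then do a reviewable integration merge.
set -e

workdir=$(mktemp -d)
cd "$workdir"

# Set up a repository with a "trunk" mainline (names are illustrative).
git init -q repo
cd repo
git config user.email dev@example.org
git config user.name "Example Dev"
git checkout -qb trunk
echo 'stable code' > core.txt
git add core.txt
git commit -qm "initial stable trunk"

# 1. Open a branch per feature.
git checkout -qb feature/sdm-tests

# 2. Develop on the branch; trunk stays stable meanwhile.
echo 'new unit tests' > sdm-tests.txt
git add sdm-tests.txt
git commit -qm "add ServiceDiscoveryManager unit tests"

# 3. Synchronize with trunk before integrating, pulling in stable changes.
git merge -q trunk

# 4. Integration merge back into trunk (after review, only when trunk is green).
git checkout -q trunk
git merge -q --no-ff feature/sdm-tests -m "integrate feature/sdm-tests"

# Had this integration broken trunk, it could be rolled back as a unit:
#   git revert -m 1 <merge-commit>
```

The `--no-ff` merge is what makes the "rolled back as a unit" part of the workflow cheap: the whole integration is one merge commit that a single revert can undo.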

> River is a bit unusual by current testing standards IMO as the test
> suites take a very long time to run, making integration a bigger
> effort. It seems like current best practice regarding testing is to
> rely more on mocking and to keep tests running in as short a time as
> possible, to receive feedback quickly. It does look like a few
> person-days of work were just lost by having to track down and isolate
> the failures.

Yes, very true. That is the reason we are adding more and more unit tests as
we go (mainly thanks to Peter and Patricia): they can be run quickly, as
opposed to the QA tests, which take a long time. I believe we now have about
200 JUnit tests, as opposed to zero a year or so ago. You may have noticed
that I did not back those out.

In light of that, and as a measure to let us integrate and verify changes
more quickly, we now have two Hudson build jobs:
- one that builds the project, runs the unit tests and creates release
artifacts (10 min max)
- one that builds the project and runs the QA suite (with the current number
of tests, about 500, this takes 3.5 hrs)
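A freestyle Hudson job along those lines boils down to a small piece of job configuration. The fragment below is only a sketch of what the fast job might look like: the Ant target names and result paths are assumptions, not River's actual build targets.

```xml
<!-- Hypothetical config.xml sketch for the fast job:
     build, run unit tests, archive results and artifacts. -->
<project>
  <builders>
    <hudson.tasks.Ant>
      <!-- Target names are illustrative; substitute the project's real ones. -->
      <targets>clean build test.unit release</targets>
    </hudson.tasks.Ant>
  </builders>
  <publishers>
    <hudson.tasks.junit.JUnitResultArchiver>
      <testResults>build/test-results/*.xml</testResults>
    </hudson.tasks.junit.JUnitResultArchiver>
  </publishers>
</project>
```

The slow job would differ only in the builder's targets (invoking the QA suite instead of the unit tests) and in its schedule, since a 3.5-hour run cannot usefully trigger on every commit.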

Of course, the library of unit tests we have at this point does not yet
cover all of the code, but we are steadily increasing coverage.
The same goes for the QA suite: we are adding categories and tests as we go,
increasing coverage there too.
Eventually, we should be able to rely on passing unit tests as a signal,
with a fair amount of certainty, that the project is stable.
The QA tests would then be run to verify whether the build is actually
releasable (integration issues, conformance to spec, regressions, etc.).
