river-dev mailing list archives

From Dan Creswell <dan.cresw...@gmail.com>
Subject Re: Benchmark organization
Date Mon, 28 Feb 2011 12:13:39 GMT
K, so....

On 28 February 2011 11:27, Peter <jini@zeus.net.au> wrote:

> > I'm not yet ready to buy modules as being a decent divide point for
> tests.
> > Tests are typically associated with packages, not modules IMHO.
> >
> > Feels like an impedance mismatch or maybe I'm not liking how much the
> > build/deploy/package mechanism is a factor in a discussion about testing.
>
>
> Currently we have:
>
> Unit tests: junit
> Integration tests: qa test harness.
> Jini platform compliance tests: qa test harness
> Regression tests: jtreg
>
> On top of that we've got the discovery and join test kit.
>
> We're very lucky to have such a vast array of tests.
>
>
We are indeed.


> It would be nice if I could just work on a small part of what is River,
> improve it using test-driven development by only running the tests needed.
>

Yep - that's a requirement for sure. But the tests you run for that probably
don't include all the tests available: probably just unit and service-level
tests or similar - probably not performance, maybe some of the compliance
stuff, etc.
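
For what it's worth, JUnit 4.8's categories are one way to express that kind
of selection without tying it to modules. A rough sketch - the marker names
and the test subject are made up:

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Marker interfaces naming kinds of test; they carry no behaviour
    // and are independent of any module layout.
    interface UnitTests {}
    interface ServiceLevelTests {}
    interface PerformanceTests {}

    public class LeaseDurationTest {

        @Test
        @Category(UnitTests.class)
        public void remainingDurationIsNeverNegative() {
            long expiry = System.currentTimeMillis() - 1000; // already expired
            long remaining = Math.max(0, expiry - System.currentTimeMillis());
            assertTrue(remaining >= 0);
        }

        @Test
        @Category(PerformanceTests.class)
        public void renewalThroughput() {
            // an expensive timing loop would live here; tagging it means
            // quick unit-level runs can exclude it
        }
    }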


>
> It seems that you must first build before you can test; sometimes it is
> wise to find the right questions before looking for the answer.
>
>
Well, yes, you must first build, but the tests you run aren't always directly
connected to that build. I might:

(1) Be just building some module which maps closely to some service and thus
want to run service-specific tests.
(2) Be running some set of performance tests against nitty-gritty
implementation details within a single module.
(3) Be running some global set of tests across a number of services, only
one of which I've changed. Sounds strange, right? Nope: if I'm halfway
through making a global change and have updated some of the services, I want
to see how I'm doing and whether I've broken anything non-obvious (see the
suite sketch after this list).
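
Scenario (3) is easy enough to express with a categories-aware suite, reusing
the made-up marker interfaces from the earlier sketch. The suite member class
names are hypothetical too; the point is that selection is by category, not
by module:

    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Categories.ExcludeCategory;
    import org.junit.experimental.categories.Categories.IncludeCategory;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    // Run the service-level tests across several services but skip the
    // slow performance runs while the global change is half-done.
    @RunWith(Categories.class)
    @IncludeCategory(ServiceLevelTests.class)
    @ExcludeCategory(PerformanceTests.class)
    @SuiteClasses({ OutriggerTests.class, ReggieTests.class, MahaloTests.class })
    public class GlobalServiceSuite {}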



> We don't build junit or jtreg every time we test; why can't the qa harness
> be separate too?
>
>
It can, but separation is merely a small element of the overall problem.
Some tests span modules, some run against APIs, some run against nitty-gritty
implementation bits. In essence, tests don't always separate neatly along
module lines IMHO; sometimes they do, sometimes they don't.
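
As a concrete example of a spanning test, something like the following
round-trips an entry through a JavaSpace. It exercises Outrigger through its
public API but equally depends on the discovery/lookup plumbing, so it has no
single home module. TestServices.javaSpace() is a made-up helper standing in
for that setup:

    import static org.junit.Assert.assertEquals;

    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;
    import org.junit.Test;

    public class SpaceRoundTripTest {

        // Entries need public object-typed fields and a public
        // no-arg constructor.
        public static class PingEntry implements net.jini.core.entry.Entry {
            public String payload;
            public PingEntry() {}
            public PingEntry(String payload) { this.payload = payload; }
        }

        @Test
        public void writtenEntryCanBeTakenBack() throws Exception {
            // Hypothetical helper: locates a running Outrigger instance;
            // the discovery details are elided here.
            JavaSpace space = TestServices.javaSpace();

            space.write(new PingEntry("hello"), null, Lease.FOREVER);
            PingEntry result =
                (PingEntry) space.take(new PingEntry(), null, 5000);

            assertEquals("hello", result.payload);
        }
    }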


> Why can't a component such as Outrigger be kept separate from the jini
> platform build?  Why can't the tests that must be run against it be known
> in advance, or compiled in advance for that matter?
>
> Why isn't it easy to determine the tests that need to be run to test a
> component?
>

We're not always talking about a component IMHO. Or at least, not a component
that is relevant at the build-system level, and herein lies the problem.


>
> Is there a better solution?
>
>
Dunno, but I'm thinking a clean separation of build/module/packaging from
test might be the way to go. The testing lifecycle and the things it touches
are unrelated to the modules and suchlike that the build system yields, and
we want to account for that.
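
To make that a bit more concrete, one could let the lifecycle, rather than
the module graph, name what runs - a sketch, again reusing the made-up
marker interfaces from earlier:

    // The lifecycle phase names the categories it runs; where those tests
    // live in the module tree is irrelevant to this mapping.
    enum TestPhase {
        COMMIT(UnitTests.class),
        NIGHTLY(UnitTests.class, ServiceLevelTests.class),
        RELEASE(UnitTests.class, ServiceLevelTests.class,
                PerformanceTests.class);

        private final Class<?>[] categories;

        TestPhase(Class<?>... categories) { this.categories = categories; }

        Class<?>[] categories() { return categories; }
    }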


> If not I'll have to face facts: my development windows of opportunity don't
> allow enough time to adequately test changes.  Maybe modularity is not the
> answer, maybe I'm grasping at straws.  I don't know, these things take time
> to figure out.
>
> Can anyone else see the problems?
>
> Anyone here know the right questions?
>

What are all our testing scenarios and their associated lifecycles (when are
they run and against what)?


>
> With the right questions, answers are easier to find.
