river-dev mailing list archives

From Dan Creswell <dan.cresw...@gmail.com>
Subject Re: Benchmark organization
Date Mon, 28 Feb 2011 14:26:11 GMT
Cool, so I'm thinking we probably have some stuff that can be done component
by component and some that can't meaningfully be done that way.

I imagine, for example, we could do conformance on a per-service basis, but
there are other things that also need conformance testing (generic lookup and
discovery), and I reckon realistically that's one we ultimately run across
everything in one go.

I also think we need to talk about which bits we run as soon as possible and
which are long-running...
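
To make that concrete, here's roughly the split I have in mind - only a
sketch, the marker interfaces and test classes below are invented, and it
assumes we tag tests with JUnit categories:

  import org.junit.Test;
  import org.junit.experimental.categories.Category;

  // Marker interfaces describing scope/duration (names invented for the sketch).
  public interface PerComponent {}   // quick, run on every change to a component
  public interface CrossCutting {}   // lookup/discovery style, run across everything

  // Each class would live in its own file; they're shown together for brevity.
  @Category(PerComponent.class)
  public class OutriggerEntryMatchingTest {
      @Test
      public void templateMatchesWrittenEntry() {
          // fast, component-local check we can run on every modification
      }
  }

  @Category(CrossCutting.class)
  public class MulticastDiscoveryConformanceTest {
      @Test
      public void discoversLookupServicesAcrossGroups() {
          // spans services, so it belongs in the long-running sweep
      }
  }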

On 28 February 2011 13:16, Peter <jini@zeus.net.au> wrote:

> Good response, let's follow it through: what are the test scenarios? When do
> we run them and under what circumstances?
>
> Unit test minor changes, performed often during modification, to catch
> obvious bugs asap.
> Then simulate a network environment and test integration, but limit it to
> relevant tests.
> Conformance testing - have we broken something, the law of unintended
> consequences?
> Regression test - did we fall for an old mistake?
> Concurrency test - have our changes brought other bugs to the surface?
> The implementation looks good? Test everything - has anything broken, and if
> so, why?
> Performance test - what impact have the changes had on performance? Before
> and after.
> Test deployment in a network environment, across multiple machines.
>
> Rinse, lather, repeat from beginning at any stage.
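>
> As a sketch of how those stages might be carved up so each can be run on its
> own - the category and suite class names here are invented, this isn't the
> actual qa harness:
>
>   import org.junit.experimental.categories.Categories;
>   import org.junit.experimental.categories.Categories.IncludeCategory;
>   import org.junit.runner.RunWith;
>   import org.junit.runners.Suite.SuiteClasses;
>
>   // Marker interfaces for the two cadences (invented for the sketch).
>   public interface UnitLevel {}
>   public interface SweepLevel {}
>
>   // AllRiverTests is a placeholder for a suite aggregating every test class.
>
>   // Run on every modification: cheap checks, catch the obvious bugs asap.
>   @RunWith(Categories.class)
>   @IncludeCategory(UnitLevel.class)
>   @SuiteClasses({ AllRiverTests.class })
>   public class PerChangeSuite {}
>
>   // Run before and after a change lands: conformance, regression, concurrency
>   // and performance - the slow, whole-of-River sweeps.
>   @RunWith(Categories.class)
>   @IncludeCategory(SweepLevel.class)
>   @SuiteClasses({ AllRiverTests.class })
>   public class FullSweepSuite {}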
>
> Then like Patricia suggests, record alternative implementations, perhaps
> with reasons for rejection / acceptance.
>
> Thoughts / refinement?
>
> ----- Original message -----
> > K, so....
> >
> > On 28 February 2011 11:27, Peter <jini@zeus.net.au> wrote:
> >
> > > > I'm not yet ready to buy modules as being a decent divide point for
> > > > tests.
> > > > Tests are typically associated with packages, not modules IMHO.
> > > >
> > > > Feels like an impedance mismatch or maybe I'm not liking how much the
> > > > build/deploy/package mechanism is a factor in a discussion about testing.
> > >
> > >
> > > Currently we have:
> > >
> > > Unit tests: junit
> > > Integration tests: qa test harness.
> > > Jini platform compliance tests: qa test harness
> > > Regression tests: jtreg
> > >
> > > On top of that we've got the discovery and join test kit.
> > >
> > > We're very lucky to have such a vast array of tests.
> > >
> > >
> > We are indeed.
> >
> >
> > > It would be nice if I could just work on a small part of what is River,
> > > improve it using test driven development by only running the tests needed.
> > >
> >
> > Yep - that's a requirement for sure. But the tests you run for that probably
> > don't include all the tests available. Probably just unit and service level
> > or similar. Probably not performance, maybe some compliance stuff etc.
> >
> >
> > >
> > > It seems that you must first build before you can test; sometimes it is
> > > wise to find the right questions before looking for the answer.
> > >
> > >
> > Well, yes you must first build, but the tests you run aren't always directly
> > connected to that build. I might:
> >
> > (1) Be just building some module which maps closely to some service and thus
> > want to run service-specific tests.
> > (2) Be running some set of performance tests against nitty-gritty
> > implementation details within a single module.
> > (3) Be running some global set of tests across a number of services, only
> > one of which I've changed. Sounds strange, right? Nope: if I'm halfway
> > through making a global change and I've updated some of the services, I want
> > to see how I'm doing and whether I broke anything non-obvious.
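> >
> > For (3), something as simple as an umbrella suite is all I'm picturing - a
> > sketch only, the per-service suite classes named below are placeholders:
> >
> >   import org.junit.runner.RunWith;
> >   import org.junit.runners.Suite;
> >   import org.junit.runners.Suite.SuiteClasses;
> >
> >   // One suite spanning every service, runnable even when only one of them
> >   // has actually changed. OutriggerTests, ReggieTests and MahaloTests stand
> >   // in for whatever per-service suites we end up with.
> >   @RunWith(Suite.class)
> >   @SuiteClasses({ OutriggerTests.class, ReggieTests.class, MahaloTests.class })
> >   public class AllServicesSuite {}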
> >
> >
> >
> > > We don't build junit or jtreg every time we test, why can't the qa harness
> > > be separate too?
> > >
> > >
> > It can, but separation is merely a small element of the overall problem.
> > Some tests span modules, some tests run against APIs, some run against
> > nitty-gritty implementation bits. In essence, tests don't always separate
> > nicely along module lines IMHO - sometimes they do, sometimes not.
> >
> >
> > > Why can't a component such as Outrigger be kept separate from the Jini
> > > platform build?  Why can't the tests that must be run against it be known
> > > in advance, or compiled in advance for that matter?
> > >
> > > Why isn't it easy to determine the tests that need to be run to test a
> > > component?
> > >
> >
> > We're not always talking about a component IMHO. Or at least, not a
> > component that is relevant at the build system level, and herein lies the
> > problem.
> >
> >
> > >
> > > Is there a better solution?
> > >
> > >
> > Dunno, but I'm thinking a clean separation of build/module/packaging from
> > test might be the way to go. The testing lifecycle and the things it touches
> > are unrelated to the modules and suchlike that the build system yields, and
> > we want to account for that.
> >
> >
> > > If not I'll have to face facts: my development windows of opportunity don't
> > > allow enough time to adequately test changes.  Maybe modularity is not the
> > > answer, maybe I'm grasping at straws.  I don't know, these things take time
> > > to figure out.
> > >
> > > Can anyone else see the problems?
> > >
> > > Anyone here know the right questions?
> > >
> >
> > What are all our testing scenarios and their associated lifecycles (when are
> > they run and against what)?
> >
> >
> > >
> > > With the right questions, answers are easier to find.
>
>
