hadoop-common-dev mailing list archives

From: Konstantin Boudnik <...@boudnik.org>
Subject: Re: [DISCUSSION]: Future of Hadoop system testing
Date: Thu, 14 Oct 2010 19:55:17 GMT
On Thu, Oct 14, 2010 at 03:44PM, Steve Loughran wrote:
> >On the other hand, there's a fairly large number of cases where no
> >introspection into the daemons' internals is required. These can be carried
> >out through simple interaction with the Hadoop CLI. To name a few: testing
> >ACL refreshes, basic file ops, etc.
> 
> -stuff we aren't testing properly today, you mean.

Yes :)

> >One of the benefits such an approach will provide is to facilitate integration
> >of other types of testing into the CI infrastructure (read Hudson), and it will
> >give us a well-supported test development environment familiar to many,
> >lowering the learning curve for potential contributors who might want to join
> >the Hadoop community and helping us make Hadoop an even better product.
> >
> 
> I'd like some JARs containing tests that could be deployed against a

Makes sense. And the good part of the approaches I laid out earlier is that
they all (even shell-based tests executed by a JUnit wrapper) can be packaged
as jars and sent around for cluster validation.
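To make the JUnit-wrapper idea concrete, something along these lines is what I
have in mind. Just a sketch: it assumes the "hadoop" script is on the PATH and
already configured for the cluster under test, and the class and helper names
are made up for illustration.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class HdfsCliSmokeTest {

      // Shells out to "hadoop fs"; relies on the hadoop script being on the
      // PATH and pointed at the cluster under test (an assumption of this sketch).
      private int runFsShell(String... args) throws Exception {
        List<String> cmd = new ArrayList<String>();
        cmd.add("hadoop");
        cmd.add("fs");
        cmd.addAll(Arrays.asList(args));
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        BufferedReader out =
            new BufferedReader(new InputStreamReader(p.getInputStream()));
        while (out.readLine() != null) { /* drain so the child never blocks */ }
        return p.waitFor();
      }

      @Test
      public void basicFileOps() throws Exception {
        String dir = "/tmp/cli-smoke-" + System.currentTimeMillis();
        assertEquals(0, runFsShell("-mkdir", dir));
        assertEquals(0, runFsShell("-ls", dir));
        assertEquals(0, runFsShell("-rmr", dir));
      }
    }

A jar of such tests can be handed to whoever runs the cluster and driven by a
plain JUnit runner, which is exactly the "send it around for validation" case.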

> cluster to QA it, to say "this cluster works", to stress test
> things, and do all the multi-host, multi JVM regression testing that
> we currently don't have formal test suites for. That will include
> HtmlUnit tests against every web page, as well as command line
> stuff.

While HtmlUnit testing might be done on top of Mini*Cluster infrastructure, I
see your point.
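For reference, an HtmlUnit check against, say, the NameNode status page would
be about this small. Again only a sketch: the host, port, page name and the
"Live Nodes" marker string are assumptions about the web UI of the cluster
under test.

    import com.gargoylesoftware.htmlunit.WebClient;
    import com.gargoylesoftware.htmlunit.html.HtmlPage;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class NameNodeWebUiTest {

      @Test
      public void statusPageIsServed() throws Exception {
        WebClient client = new WebClient();
        // URL and the marker string below are assumptions for this sketch
        HtmlPage page = (HtmlPage)
            client.getPage("http://namenode.example.com:50070/dfshealth.jsp");
        assertTrue(page.asText().contains("Live Nodes"));
        client.closeAllWindows();
      }
    }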

> I'd also like this stuff to be somewhat independent of how the
> cluster gets deployed, you just point the test runner at a list of
> machines or a cluster and it works things out and runs the tests.
> That way, whatever CM tooling you have, you can test a cluster

Exactly. One of the points I was after when we discussed the system testing
framework (read Herriot) was to separate it from deployment as much as
possible. Now there's HADOOP-6980 to be worked on.
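As for keeping the tests independent of how the cluster was deployed, the
simplest thing I can think of is to have the test jar pick up the cluster
coordinates at run time, e.g. from a system property. A sketch only: the
property name "test.cluster.fs" is made up for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class AnyClusterSmokeTest {

      // "test.cluster.fs" is a hypothetical property; the runner would pass
      // -Dtest.cluster.fs=hdfs://<namenode>:8020 for whichever cluster it targets.
      private FileSystem targetFs() throws Exception {
        Configuration conf = new Configuration();
        String fsUri = System.getProperty("test.cluster.fs");
        if (fsUri != null) {
          conf.set("fs.default.name", fsUri);
        }
        return FileSystem.get(conf);
      }

      @Test
      public void rootIsListable() throws Exception {
        FileSystem fs = targetFs();
        assertTrue(fs.getFileStatus(new Path("/")).isDir());
      }
    }

That way the same test jar runs against a MiniDFSCluster, a one-node box, or a
real deployment, and the CM tooling in front of it doesn't matter.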

Cos

