db-derby-dev mailing list archives

From Mike Matrigali <mikem_...@sbcglobal.net>
Subject Re: [jira] Commented: (DERBY-1116) Define a minimal acceptance test suite for checkins
Date Thu, 16 Mar 2006 20:31:37 GMT
I also do not believe running derbyall is a requirement currently.

So far I run derbyall whenever I can.  Recently, for some patches that
obviously affect only backup code, I have been running storeall instead,
and then paying attention to the tinderbox runs to make sure I didn't
miss anything.

It is obvious from the kinds of issues we have seen over the past few
months that no matter how much you test, there may be an issue on a
platform that you don't have access to.  So a committer should not only
consider how many tests they can run, but should also help watch for
changes in the test results after a commit.

I believe the minimum set of tests a committer should run is whatever they
believe is the "right" set of tests.  For most of my commits I have run
derbyall, for some I have run just a single suite, and for some
test/javadoc changes just that test or the javadoc build.
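For context, the suites named above (derbyall, storeall) were run with Derby's classic test harness; the sketch below assumes a built Derby source tree with derby.jar, derbyTesting.jar, and their dependencies already on the CLASSPATH, and the single-test file name is only an illustrative placeholder:

```shell
# Run the full regression suite (derbyall) with the classic harness.
java org.apache.derbyTesting.functionTests.harness.RunSuite derbyall

# Run only the store-related suite (storeall), e.g. for backup changes.
java org.apache.derbyTesting.functionTests.harness.RunSuite storeall

# Run a single test instead of a whole suite (test name is hypothetical).
java org.apache.derbyTesting.functionTests.harness.RunTest lang/sometest.sql
```

Results land under the current directory in a summary report, which is what the tinderbox runs mentioned above aggregate across platforms.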

Daniel John Debrunner (JIRA) wrote:
>     [ http://issues.apache.org/jira/browse/DERBY-1116?page=comments#action_12370592 ]

> Daniel John Debrunner commented on DERBY-1116:
> ----------------------------------------------
> Is there any actual requirement to run derbyall at the moment?
> I think a goal for everyone is to have derbyall running clean at all times on all platforms, which is not quite the same.
> I found this thread which talks about what folks do:
> http://www.nabble.com/Running-derbyall-before-submitting-patches-t30537.html#a86475
> I think I've mentioned before that expecting people to run derbyall on all platforms is unrealistic and therefore not a requirement,
> and that running derbyall on a single common platform/jdk is sufficient, but I'm not sure anyone has said it must be done.
> If we do have a mandated smaller subset then we would need some guidelines on what to do when there are multiple commits,
> each of which ran the minimal set, but we end up with several derbyall failures. How do we (who?) match failures to commits, and how do we decide
> which commit to revert? If one of those commits ran derbyall before contributing, is that change blessed and the others suspect?
> I've always assumed a model of running whatever tests the contributor thinks are sufficient: maybe
> for some changes it's derbyall, for some it's none, for modifying a single test it's that test,
> for xa code it's the xa suite and the jdbcapi suite, etc.  Having the great tinderbox and
> regression testing is a huge help here, as you say, to running reduced testing. The downside is
> that when multiple failures exist due to reduced contributor testing, it can affect a lot of
> other folks in different time zones. If I break derbyall at 5pm Pacific time then it can affect
> the Europeans and Tomohito for a complete day until I come back the next day and address it,
> unless someone else has the itch to fix my mistake.
> I didn't understand your comment about "being tempted to run a smaller set of tests,
> but being blocked by others running derbyall".
> Is this because you are sharing test machines or something else?
>>Define a minimal acceptance test suite for checkins
>>         Key: DERBY-1116
>>         URL: http://issues.apache.org/jira/browse/DERBY-1116
>>     Project: Derby
>>        Type: Improvement
>>  Components: Test
>>    Reporter: David Van Couvering
>>    Priority: Minor
>>Now that we have an excellent notification system for tinderbox/nightly regression failures,
>>I would like to suggest that we reduce the size of the test suite being run prior to checkin.
>>I am not sure what should be in such a minimal test, but in particular I would like to remove
>>things such as the stress test and generally reduce the number of tests being run for each
>>subsystem/area of code.
>>As an example of how derbyall currently affects my productivity, I was running derbyall
>>on my machine starting at 2pm, and by evening it was still running.  At 9pm my machine was
>>accidentally powered down, and this morning I am restarting the test run.
>>I have been tempted (and acted on such temptation) in the past to run a smaller set of
>>tests, only to find out that I have blocked others who are running derbyall prior to checkin.
>>For this reason, we need to define a minimal acceptance test suite (MATS) that we all agree
>>to run prior to checkin.
>>One could argue that you could run your tests on another machine and thus avoid the loss of
>>productivity, but we can't assume everybody in the community has nice big test servers to run
>>their tests on.
>>If there are no objections, I can take a first pass at defining what this test suite
>>should look like, but I suspect many others in the community have strong opinions about this
>>and may even wish to volunteer to do this definition themselves (for example, some of you
>>who may be working in the QA division in some of our Big Companies :) ).
