db-derby-dev mailing list archives

From Mike Matrigali <mikem_...@sbcglobal.net>
Subject Re: Cleaning up test failures
Date Thu, 19 Jan 2006 22:51:39 GMT

I like the idea of adding a new component to JIRA, and of encouraging
anyone who sees a problem in a test run to file an issue there.  I believe
it is already the case that a JIRA issue should be filed for any nightly
test failure; adding the component will just make such failures much
easier to track.  I would encourage anyone to report a problem quickly,
even if that means not providing the level of detail one would usually
add to a bug description.  Adding more information after investigating
is always appreciated, but the sooner the community sees the issue, the
sooner it may get fixed.  At the very least, others can add more
information, such as noting that they also see the failure in a
different environment or that the test passed for them in environment xyz.

A new component also makes these issues very easy to see in JIRA: the
browse-project page will automatically highlight the category and let
one see the outstanding test issues with a single click.
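
For illustration, here is a minimal sketch of such a saved filter,
written in JQL syntax; the component name follows the "regression test
failure" proposal quoted below, and the exact query interface JIRA
offers may differ:

    project = DERBY
      AND component = "regression test failure"
      AND resolution = Unresolved
    ORDER BY created DESC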

Having said this, I am not opposed to any of the proposed emails or
wiki pages, but I think it is important to use JIRA to track the issues
so they don't get lost.  It would be nice if any high-level test failure
reporting could somehow refer to the associated JIRA entries for details.

Kathey Marsden wrote:
> Daniel John Debrunner wrote:
>>>In order to increase awareness, I am proposing that an email is sent to
>>>derby-dev after each tinderbox and nightly test run sending out the test
> The thing about such mails is:
> 1) They do not have enough historical context.
> 2) People in general get used to ignoring what they see every day, like
> a mail that says "nightly test report".
> I like Mike's idea of using JIRA. It would conform to our distributed
> model and get these issues the attention they deserve.
> 1) Add a component "regression test failure".
> 2) Whoever is running/checking results will either:
>     1) File a new issue for new failures. Put the name of the test in
> the summary, include environment information and the diff in the bug,
> and set the component to "regression test failure". Any probable
> suspect submissions can be pointed out. (A sketch of such a report
> follows this message.)
>     2) Add a comment to old issues noting that the test failed that day.
> This will get the attention of those who need to really look at the
> issue, and it will provide clear tracking of when the issue first
> started, how often intermittent issues occur, and when the issue is
> resolved. Also, folks can run a quick query to see if their failure is
> a known issue.
> Kathey
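
As an illustration of Kathey's item 1), a report might look something
like the sketch below; the test name, environment, and suspect shown
are invented placeholders, not a real failure:

    Summary:     lang/someTest.sql fails in nightly regression run
    Component:   regression test failure
    Environment: Sun JDK 1.4.2, Windows XP  (placeholder values)
    Description: Failed in the 2006-01-19 nightly run; diff attached.
                 Possible suspect: a recent optimizer checkin.

Folks hitting the same failure elsewhere can then comment on the issue
rather than filing a duplicate.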
