harmony-dev mailing list archives

From "Vladimir Ivanov" <ivavladi...@gmail.com>
Subject Re: [classlib][testing] excluding the failed tests
Date Wed, 05 Jul 2006 05:40:36 GMT
Yesterday I tried to add a regression test to an existing TestCase in the
security module, but found that the TestCase is on the exclude list. I had to
un-exclude it, run it, check that my test passes, and exclude the TestCase
again. That was a little inconvenient; besides, my new (I believe valid)
regression test will go straight onto the exclude list after integration...
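
For context, such a regression test is typically a single method added to the
module's existing TestCase. Below is a minimal hypothetical sketch in the
JUnit 3 style used by classlib; the package, class, method, and tested
behaviour are all invented for illustration:

    // Hypothetical sketch: a regression test added to an existing
    // security-module TestCase. All names here are invented.
    package org.apache.harmony.security.tests.java.security;

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    import junit.framework.TestCase;

    public class MessageDigestTest extends TestCase {

        // Regression test: getInstance() must reject an unknown algorithm
        // name with NoSuchAlgorithmException.
        public void testGetInstanceUnknownAlgorithm() throws Exception {
            try {
                MessageDigest.getInstance("no-such-algorithm");
                fail("NoSuchAlgorithmException expected");
            } catch (NoSuchAlgorithmException e) {
                // expected: an unknown algorithm must not be resolved
            }
        }
    }

If MessageDigestTest is on the module's exclude list, the new method above
never runs, which is exactly the inconvenience described.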

I see that we are close to a decision on what to do with failing tests.
Am I right that we are at the point of agreement on the following?

There could be two groups of failing tests:
*Tests that never passed.
*Tests that recently started failing.

Tests that never passed should be stored in TestCases with the suffix "Fail"
(StringFailTest.java, for example). They are subject to review and then either
deletion, fixing, or fixing the implementation if they uncover a bug in the
API implementation.
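
A minimal hypothetical sketch of that convention, again in JUnit 3 style (the
class and its failure are invented for illustration):

    // Hypothetical sketch of the proposed "Fail" suffix convention: a
    // TestCase that has never passed keeps a Fail-suffixed name so it
    // stays visible for review instead of hiding on the exclude list.
    package org.apache.harmony.tests.java.lang;

    import junit.framework.TestCase;

    public class StringFailTest extends TestCase {

        // Never passed: either the test is invalid (fix or delete it), or
        // it exposes a real bug in the API implementation (fix the
        // implementation); track the decision in a JIRA issue.
        public void testNeverPassed() {
            fail("under review -- see the corresponding JIRA issue");
        }
    }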
There should be zero tests that recently started failing. If such a test
appears, it should be fixed within 24 hours; otherwise the commit that
introduced the failure will be rolled back.
Right?

 Thanks, Vladimir

On 7/4/06, Tim Ellison <t.p.ellison@gmail.com> wrote:

> Nathan Beyer wrote:
> > Based on what I've seen of the excluded tests, category 1 is the
> > predominant case. This could be validated by looking at old revisions
> > in SVN.
>
> I'm sure that is true; I'm just saying that the build system's 'normal'
> state is that all enabled tests pass.  My concern was over your
> statement that you have had failing tests for months.
>
> What is failing for you now?
>
> Regards,
> Tim
>
>
> >> -----Original Message-----
> >> From: Geir Magnusson Jr [mailto:geir@pobox.com]
> >>
> >> Is this the case where we have two 'categories'?
> >>
> >>   1) tests that never worked
> >>
> >>   2) tests that recently broke
> >>
> >> I think that a #2 should never persist for more than one build
> >> iteration, as either things get fixed or backed out.  I suppose then we
> >> are really talking about category #1, and that we don't have the "broken
> >> window" problem as we never had the window there in the first place?
> >>
> >> I think it's important to understand this (if it's actually true).
> >>
> >> geir
> >>
> >>
> >> Tim Ellison wrote:
> >>> Nathan Beyer wrote:
> >>>> How are other projects handling this? My opinion is that tests which
> >>>> are expected and known to pass should always be running, and if they
> >>>> fail and the failure can be independently recreated, then it's
> >>>> something to be posted on the list, if trivial (typo in build file?),
> >>>> or logged as a JIRA issue.
> >>> Agreed, the tests we have enabled are run on each build (hourly if
> >>> things are being committed), and failures are sent to the commit list.
> >>>
> >>>> If it's broken for a significant amount of time (weeks, months), then
> >>>> rather than excluding the test, I would propose moving it to a "broken"
> >>>> or "possibly invalid" source folder that's out of the test path. If it
> >>>> doesn't already have a JIRA issue, then one should be created.
> >>> Yes, though I'd be inclined to move it sooner -- tests should not stay
> >>> broken for more than a couple of days.
> >>>
> >>> Recently our breakages have been invalid tests rather than a broken
> >>> implementation, but they still need to be investigated/resolved.
> >>>
> >>>> I've been living with consistently failing tests for a long time now.
> >>>> Recently it was the unstable Socket tests, but I've been seeing the
> >>>> WinXP long file name [1] test failing for months.
> >>> IMHO you should be shouting about it!  The alternative is that we
> >>> tolerate a few broken windows and overall quality slips.
> >>>
> >>>> I think we may be unnecessarily complicating some of this by assuming
> >>>> that all of the donated tests that are currently excluded and failing
> >>>> are completely valid. I believe that the currently excluded tests are
> >>>> either failing because they aren't isolated according to the suggested
> >>>> test layout or they are invalid tests; I suspect that HARMONY-619 [1]
> >>>> is a case of the latter.
> >>>>
> >>>> So I go back to my original suggestion: implement the testing proposal,
> >>>> then fix/move any excluded tests to where they work properly, or
> >>>> determine that they are invalid and delete them.
> >>> Yes, the tests do need improvements too.
> >>>
> >>> Regards,
> >>> Tim
> >>>
> >>>
> >>>> [1] https://issues.apache.org/jira/browse/HARMONY-619
> >>>>
> >
>
> --
>
> Tim Ellison (t.p.ellison@gmail.com)
> IBM Java technology centre, UK.
>
