harmony-dev mailing list archives

From "Vladimir Ivanov" <ivavladi...@gmail.com>
Subject Re: [classlib][testing] excluding the failed tests
Date Thu, 06 Jul 2006 09:52:19 GMT
More details: the failing test is
org/apache/harmony/security/tests/java/security/SecureRandom2Test.java.
At present it has 2 failing tests with messages about the SHA1PRNG
algorithm (no support for the SHA1PRNG provider).
They look like valid tests for unimplemented functionality, but I'm not
sure what to do with such TestCase(s): comment out these 2 tests or move them
into a separate TestCase.
Ideas?
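For reference, one possible middle road (a sketch only; the class and method names below are illustrative, not from the Harmony tree) is to probe for the provider up front and let the tests bail out or be routed to a separate TestCase when it is absent, instead of commenting them out:

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class Sha1PrngGuard {

    // Probe whether a SHA1PRNG SecureRandom can be obtained on this VM.
    static boolean sha1PrngAvailable() {
        try {
            SecureRandom.getInstance("SHA1PRNG");
            return true;
        } catch (NoSuchAlgorithmException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // A test could check this flag and return early (effectively
        // skipping) when the provider is not implemented yet.
        System.out.println("SHA1PRNG available: " + sha1PrngAvailable());
    }
}
```

On VMs that do ship a SHA1PRNG provider the guarded tests would run as normal, so the tests would start passing automatically once the provider is implemented.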

By the way, it is probably worth reviewing *all* excluded TestCases and:
1.      Unexcluding them if all tests pass.
2.      Reporting a bug and providing a patch for the test to make it pass,
if it failed due to a bug in the test.
3.      Reporting a bug (and providing a patch) for the implementation to
make the tests pass, if it was/is a bug in the implementation and no such
issue exists in JIRA.
4.      Specifying the reasons for excluding TestCases in the exclude list
to make the further clean-up process easier.
5.      Reviewing the results of this exclude-list clean-up activity and
then deciding what to do with the remaining failing tests.

I can do it starting next week. Do you think it is worth doing?
 Thanks, Vladimir


On 7/6/06, Nathan Beyer <nbeyer@kc.rr.com> wrote:
>
> Did the TestCase run without a failure? If it didn't, then I would ask you
> to attempt to fix it and post both the fix patch and a patch to enable it.
> If it did pass, then just post a patch to enable it, or submit an issue
> asking for it to be removed from the exclude list.
>
> If the test is failing because of a bug, then log an issue about the bug
> and
> try to fix the issue.
>
> -Nathan
>
> > -----Original Message-----
> > From: Vladimir Ivanov [mailto:ivavladimir@gmail.com]
> > Sent: Wednesday, July 05, 2006 12:41 AM
> > To: harmony-dev@incubator.apache.org
> > Subject: Re: [classlib][testing] excluding the failed tests
> >
> > Yesterday I tried to add a regression test to an existing TestCase in
> > the security module, but found that the TestCase is in the exclude list.
> > I had to un-exclude it, run it, check that my test passes, and exclude
> > the TestCase again - it was a bit inconvenient. Besides, my new (and, I
> > believe, valid) regression test will go directly into the exclude list
> > after integration...
> >
> > I see that we are close to a decision on what to do with failing tests.
> > Am I right that we have agreed on the following?:
> >
> > There could be two groups of failing tests:
> > *Tests that never passed.
> > *Tests that recently started failing.
> >
> > Tests that never passed should be stored in TestCases with the suffix
> > "Fail" (StringFailTest.java, for example). They are subject to review
> > and either deletion, fixing, or fixing the implementation if they reveal
> > a bug in the API implementation.
> > There should be 0 tests that recently started failing. If such a test
> > appears, it should be fixed within 24h; otherwise, the commit which
> > introduced the failure will be rolled back.
> > Right?
> >
> >  Thanks, Vladimir
> >
> > On 7/4/06, Tim Ellison <t.p.ellison@gmail.com> wrote:
> >
> > > Nathan Beyer wrote:
> > > > Based on what I've seen of the excluded tests, category 1 is the
> > > > predominant case. This could be validated by looking at old
> > > > revisions in SVN.
> > >
> > > I'm sure that is true, I'm just saying that the build system's
> > > 'normal' state is that all enabled tests pass.  My concern was over
> > > your statement that you have had failing tests for months.
> > >
> > > What is failing for you now?
> > >
> > > Regards,
> > > Tim
> > >
> > >
> > > >> -----Original Message-----
> > > >> From: Geir Magnusson Jr [mailto:geir@pobox.com]
> > > >>
> > > >> Is this the case where we have two 'categories'?
> > > >>
> > > >>   1) tests that never worked
> > > >>
> > > >>   2) tests that recently broke
> > > >>
> > > >> I think that a #2 should never persist for more than one build
> > > >> iteration, as either things get fixed or backed out.  I suppose
> > > >> then we are really talking about category #1, and that we don't
> > > >> have the "broken window" problem as we never had the window there
> > > >> in the first place?
> > > >>
> > > >> I think it's important to understand this (if it's actually true).
> > > >>
> > > >> geir
> > > >>
> > > >>
> > > >> Tim Ellison wrote:
> > > >>> Nathan Beyer wrote:
> > > >>>> How are other projects handling this? My opinion is that tests
> > > >>>> which are expected and known to pass should always be running,
> > > >>>> and if they fail and the failure can be independently recreated,
> > > >>>> then it's something to be posted on the list, if trivial (typo in
> > > >>>> build file?), or logged as a JIRA issue.
> > > >>> Agreed, the tests we have enabled are run on each build (hourly
> > > >>> if things are being committed), and failures are sent to the
> > > >>> commit list.
> > > >>>
> > > >>>> If it's broken for a significant amount of time (weeks, months),
> > > >>>> then rather than excluding the test, I would propose moving it to
> > > >>>> a "broken" or "possibly invalid" source folder that's out of the
> > > >>>> test path. If it doesn't already have a JIRA issue, then one
> > > >>>> should be created.
> > > >>> Yes, though I'd be inclined to move it sooner -- tests should not
> > > >>> stay broken for more than a couple of days.
> > > >>>
> > > >>> Recently our breakages have been invalid tests rather than a
> > > >>> broken implementation, but they still need to be
> > > >>> investigated/resolved.
> > > >>>
> > > >>>> I've been living with consistently failing tests for a long time
> > > >>>> now. Recently it was the unstable Socket tests, but I've been
> > > >>>> seeing the WinXP long file name [1] test failing for months.
> > > >>> IMHO you should be shouting about it!  The alternative is that we
> > > >>> tolerate a few broken windows and overall quality slips.
> > > >>>
> > > >>>> I think we may be unnecessarily complicating some of this by
> > > >>>> assuming that all of the donated tests that are currently
> > > >>>> excluded and failing are completely valid. I believe that the
> > > >>>> currently excluded tests are either failing because they aren't
> > > >>>> isolated according to the suggested test layout or they are
> > > >>>> invalid tests; I suspect that HARMONY-619 [1] is a case of the
> > > >>>> latter.
> > > >>>>
> > > >>>> So I go back to my original suggestion: implement the testing
> > > >>>> proposal, then fix/move any excluded tests to where they work
> > > >>>> properly, or determine that they are invalid and delete them.
> > > >>> Yes, the tests do need improvements too.
> > > >>>
> > > >>> Regards,
> > > >>> Tim
> > > >>>
> > > >>>
> > > >>>> [1] https://issues.apache.org/jira/browse/HARMONY-619
> > > >>>>
> > > >
> > > >
> > > >
> > > >
> > > > ---------------------------------------------------------------------
> > > > Terms of use : http://incubator.apache.org/harmony/mailing.html
> > > > To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
> > > > For additional commands, e-mail: harmony-dev-help@incubator.apache.org
> > > >
> > > >
> > >
> > > --
> > >
> > > Tim Ellison ( t.p.ellison@gmail.com)
> > > IBM Java technology centre, UK.
> > >
> > >
> > >
>
>
>
>
