harmony-dev mailing list archives

From "Alexei Zakharov" <alexei.zakha...@gmail.com>
Subject Re: [classlib][testing] excluding the failed tests
Date Fri, 30 Jun 2006 09:24:05 GMT
Hi Nathan,

> I think we may be unnecessarily complicating some of this by assuming that
> all of the donated tests that are currently excluded and failing are
> completely valid. I believe that the currently excluded tests are either
> failing because they aren't isolated according to the suggested test
> layout or they are invalid tests

I will give a concrete example. Currently for java.beans we have more
than a thousand tests in 50 classes, and about 30% of them fail. These
are not invalid tests; they just came from a different origin than the
java.beans implementation currently in svn. They mostly test the
compatibility with the RI that the current implementation has problems
with.

Now I am working on enabling these 30%, but this is not an easy task.
It will take time (internal stuff needs refactoring, etc.). And it is a
standard situation for a test class to have, for example, 30 passing
tests and 9 failing ones. Since there are failures, the whole test
class is excluded. As a result we currently have only 22 test classes
enabled, with just 130 tests inside. So about a thousand (!) passing
tests are thrown overboard. IMHO this is not a normal situation and we
need to find some solution, at least for the period while these 30% are
being fixed.
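One way to keep those ~1000 passing tests running in the meantime is to enumerate the known-good tests explicitly instead of excluding the whole class. Below is a stdlib-only sketch of that idea (class and method names are invented for illustration; a real JUnit 3.x suite() would add the tests to a junit.framework.TestSuite rather than use reflection):

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for a test class where 3 methods pass and one is
// known to fail against the current implementation.
class StringTestStandIn {
    public void testLength() { if ("abc".length() != 3) throw new AssertionError(); }
    public void testConcat() { if (!"ab".concat("c").equals("abc")) throw new AssertionError(); }
    public void testTrim()   { if (!" a ".trim().equals("a")) throw new AssertionError(); }
    public void testKnownFailure() { throw new AssertionError("fails on current impl"); }
}

public class SuiteSketch {
    // The suite() idea: list the passing tests explicitly instead of
    // excluding the whole TestCase.
    static final List<String> PASSING =
        Arrays.asList("testLength", "testConcat", "testTrim");

    public static void main(String[] args) throws Exception {
        int run = 0;
        for (String name : PASSING) {
            Method m = StringTestStandIn.class.getMethod(name);
            m.invoke(new StringTestStandIn()); // throws if the test fails
            run++;
        }
        System.out.println(run + " tests run, 1 known failure skipped");
    }
}
```

The known failure never runs, so the build stays green while the remaining coverage is preserved.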

2006/6/30, Nathan Beyer <nbeyer@kc.rr.com>:
>
> > -----Original Message-----
> > From: Geir Magnusson Jr [mailto:geir@pobox.com]
> > George Harley wrote:
> > > Nathan Beyer wrote:
> > >> Two suggestions:
> > >> 1. Approve the testing strategy [1] and implement/rework the modules
> > >> appropriately.
> > >> 2. Fix the tests!
> > >>
> > >> -Nathan
> > >>
> > >> [1]
> > >>
> > http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html
> > >>
> > >>
> > >
> > > Hi Nathan,
> > >
> > > What are your thoughts on running or not running test cases containing
> > > problematic test methods while those methods are being investigated and
> > > fixed up?
> > >
> >
> > That's exactly the problem.  We need a clear way to maintain and track
> > this stuff.
> >
> > geir
>
> How are other projects handling this? My opinion is that tests which are
> expected and known to pass should always be running, and if they fail and
> the failure can be independently recreated, then it's something to be
> posted on the list, if trivial (typo in a build file?), or logged as a
> JIRA issue.
>
> If it's broken for a significant amount of time (weeks, months), then
> rather than excluding the test, I would propose moving it to a "broken" or
> "possibly invalid" source folder that's out of the test path. If it
> doesn't already have a JIRA issue, then one should be created.
>
> I've been living with consistently failing tests for a long time now.
> Recently it was the unstable Socket tests, but I've been seeing the WinXP
> long file name [1] test failing for months.
>
> I think we may be unnecessarily complicating some of this by assuming that
> all of the donated tests that are currently excluded and failing are
> completely valid. I believe that the currently excluded tests are either
> failing because they aren't isolated according to the suggested test
> layout or they are invalid tests; I suspect that HARMONY-619 [1] is a case
> of the latter.
>
> So I go back to my original suggestion, implement the testing proposal, then
> fix/move any excluded tests to where they work properly or determine that
> they are invalid and delete them.
>
> [1] https://issues.apache.org/jira/browse/HARMONY-619
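Nathan's "broken source folder" proposal above could look something like the following hypothetical Ant fragment (directory layout and property names are invented; quarantined tests simply live outside the compiled test path until their JIRA issues are resolved):

```xml
<!-- Sketch of a module build.xml: only src/test/java is compiled and run;
     src/test/broken is never referenced, so its tests cannot fail the build. -->
<javac srcdir="src/test/java" destdir="${build.test.dir}"/>

<junit>
    <batchtest todir="${report.dir}">
        <fileset dir="src/test/java" includes="**/*Test.java"/>
    </batchtest>
</junit>
```

Moving a file back from src/test/broken to src/test/java is then the whole "re-enable" step, and the move shows up clearly in svn history.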
>
> >
> > >
> > > Best regards,
> > > George
> > >
> > >
> > >>
> > >>
> > >>> -----Original Message-----
> > >>> From: Geir Magnusson Jr [mailto:geir@pobox.com]
> > >>> Sent: Tuesday, June 27, 2006 12:09 PM
> > >>> To: harmony-dev@incubator.apache.org
> > >>> Subject: Re: [classlib][testing] excluding the failed tests
> > >>>
> > >>>
> > >>>
> > >>> George Harley wrote:
> > >>>
> > >>>> Hi Geir,
> > >>>>
> > >>>> As you may recall, a while back I floated the idea and supplied some
> > >>>> seed code to define all known failing test methods in an XML file
> > >>>> (an "exclusions list") that could be used by JUnit at test run time
> > >>>> to skip over them while allowing the rest of the test methods in a
> > >>>> class to run [1]. Obviously I thought about that when catching up
> > >>>> with this thread but, more importantly, your comment about being
> > >>>> reluctant to have more dependencies on JUnit also motivated me to
> > >>>> go off and read some more about TestNG [2].
> > >>>>
> > >>>> It was news to me that TestNG provides out-of-the-box support for
> > >>>> excluding specific test methods as well as groups of methods (where
> > >>>> the groups are declared in source file annotations or Javadoc
> > >>>> comments). Even better, it can do this on existing JUnit test code,
> > >>>> provided that the necessary meta-data is present (annotations if
> > >>>> compiling to a 1.5 target; Javadoc comments if targeting 1.4 like
> > >>>> we currently are). There is a utility available in the TestNG
> > >>>> download and also in the Eclipse support plug-in that helps migrate
> > >>>> directories of existing JUnit tests to TestNG by adding in the
> > >>>> basic meta-data (although for me the Eclipse version also tried to
> > >>>> break the test class inheritance from junit.framework.TestCase,
> > >>>> which was definitely not what was required).
> > >>>>
> > >>>> Perhaps ... just perhaps ... we should be looking at something like
> > >>>> TestNG (or my wonderful "exclusions list" :-) ) to provide the
> > >>>> granularity of test configuration that we need.
> > >>>>
> > >>>> Just a thought.
> > >>>>
> > >>> How 'bout that ;)
> > >>>
> > >>> geir
> > >>>
> > >>>
> > >>>> Best regards,
> > >>>> George
> > >>>>
> > >>>> [1] http://issues.apache.org/jira/browse/HARMONY-263
> > >>>> [2] http://testng.org
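The TestNG markup George describes might look like the following sketch (class, method, and group names are invented; the Javadoc-comment form is the one TestNG offers for 1.4 targets):

```java
// Hypothetical test class using TestNG's Javadoc-style annotations.
// Individual methods are tagged with groups, so a run can exclude the
// "broken" group instead of excluding the whole class.
public class StringCompatTest {
    /**
     * @testng.test groups = "passing"
     */
    public void testLength() {
        if ("abc".length() != 3) throw new AssertionError();
    }

    /**
     * Known to fail against the current implementation; skipped by
     * disabling the "broken" group at run time, not by exclusion of
     * the entire TestCase.
     *
     * @testng.test groups = "broken"
     */
    public void testCompatCorner() {
        throw new AssertionError("compatibility gap, see JIRA");
    }
}
```

A TestNG suite definition would then exclude the "broken" group in its run configuration, and the remaining methods in the class still execute.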
> > >>>>
> > >>>>
> > >>>>
> > >>>> Geir Magnusson Jr wrote:
> > >>>>
> > >>>>> Alexei Zakharov wrote:
> > >>>>>
> > >>>>>
> > >>>>>> Hi,
> > >>>>>> +1 for (3), but I think it will be better to define a suite()
> > >>>>>> method and enumerate the passing tests there rather than to
> > >>>>>> comment out the code.
> > >>>>>>
> > >>>>>>
> > >>>>> I'm reluctant to see more dependencies on JUnit when we could
> > >>>>> control at a level higher in the build system.
> > >>>>>
> > >>>>> Hard to explain, I guess, but if our exclusions are buried in
> > >>>>> .java, I would think that reporting and tracking over time is
> > >>>>> going to be much harder.
> > >>>>>
> > >>>>> geir
> > >>>>>
> > >>>>>
> > >>>>>
> > >>>>>> 2006/6/27, Richard Liang <richard.liangyx@gmail.com>:
> > >>>>>>
> > >>>>>>
> > >>>>>>> Hello Vladimir,
> > >>>>>>>
> > >>>>>>> +1 to option 3). We shall comment the failing test cases out
> > >>>>>>> and add FIXME notes to remind us to diagnose the problems
> > >>>>>>> later. ;-)
> > >>>>>>>
> > >>>>>>> Vladimir Ivanov wrote:
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>> I see your point.
> > >>>>>>>> But I feel that we can miss regressions in untested code if we
> > >>>>>>>> exclude TestCases. Now, for example, we miss testing of
> > >>>>>>>> java.lang.Class/Process/Thread/String and some other classes.
> > >>>>>>>>
> > >>>>>>>> While we have failing tests and don't want to pay attention to
> > >>>>>>>> these failures we can:
> > >>>>>>>> 1) Leave things as is - do not run TestCases with failing
> > >>>>>>>> tests.
> > >>>>>>>> 2) Split each passing/failing TestCase into a separate "failing
> > >>>>>>>> TestCase" and "passing TestCase" and exclude the "failing
> > >>>>>>>> TestCases". When a test or the implementation is fixed we move
> > >>>>>>>> tests from the failing TestCase to the passing TestCase.
> > >>>>>>>> 3) Comment out failing tests in TestCases. It is better to run
> > >>>>>>>> 58 tests instead of 0 for String.
> > >>>>>>>> 4) Run all TestCases, then compare the test run results with
> > >>>>>>>> the 'list of known failures' and see whether new failures
> > >>>>>>>> appeared. This, I think, is better than 1, 2 and 3, but the
> > >>>>>>>> overhead is that we support 2 lists - a list of known failing
> > >>>>>>>> tests and an exclude list where we put crashing tests.
> > >>>>>>>>
> > >>>>>>>> Thanks, Vladimir
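The comparison step in Vladimir's option 4 above can be sketched with stdlib code only (all test names here are invented examples): subtract the maintained known-failures list from the observed failures, and anything left over is a new regression worth a JIRA issue.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of option 4: diff a test run's failures against the maintained
// 'list of known failures'; the remainder are new regressions.
public class KnownFailuresCheck {
    static Set<String> newRegressions(Set<String> observedFailures,
                                      Set<String> knownFailures) {
        Set<String> diff = new HashSet<String>(observedFailures);
        diff.removeAll(knownFailures);
        return diff;
    }

    public static void main(String[] args) {
        Set<String> known = new HashSet<String>(Arrays.asList(
            "StringTest.testCompatCorner", "BeansTest.testVetoableChange"));
        Set<String> observed = new HashSet<String>(Arrays.asList(
            "StringTest.testCompatCorner", "SocketTest.testBindTwice"));
        // Only the failure absent from the known list is reported.
        System.out.println(newRegressions(observed, known));
    }
}
```

The second list Vladimir mentions (crashing tests) would still live in the build's exclude list, since a crashed VM produces no results to diff.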
> > >>>>>>>> On 6/26/06, Tim Ellison <t.p.ellison@gmail.com> wrote:
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>>> Mikhail Loenko wrote:
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>> Hi Vladimir,
> > >>>>>>>>>>
> > >>>>>>>>>> IMHO the tests are to verify that an update does not
> > >>>>>>>>>> introduce any regression. So there are two options: remember
> > >>>>>>>>>> exactly which tests may fail, or remember that all tests
> > >>>>>>>>>> must pass. I believe the latter one is a bit easier and
> > >>>>>>>>>> safer.
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>> +1
> > >>>>>>>>>
> > >>>>>>>>> Tim
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>> Thanks,
> > >>>>>>>>>> Mikhail
> > >>>>>>>>>>
> > >>>>>>>>>> 2006/6/26, Vladimir Ivanov <ivavladimir@gmail.com>:
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>> Hi,
> > >>>>>>>>>>> Working with tests I noticed that we are excluding some
> > >>>>>>>>>>> tests just because several tests from a single TestCase
> > >>>>>>>>>>> fail.
> > >>>>>>>>>>>
> > >>>>>>>>>>> For example, the TestCase 'tests.api.java.lang.StringTest'
> > >>>>>>>>>>> has 60 tests and only 2 of them fail. But the build
> > >>>>>>>>>>> excludes the whole TestCase and we just miss testing of the
> > >>>>>>>>>>> java.lang.String implementation.
> > >>>>>>>>>>>
> > >>>>>>>>>>> Do we really need to exclude TestCases in the 'ant test'
> > >>>>>>>>>>> target?
> > >>>>>>>>>>>
> > >>>>>>>>>>> My suggestion is: do not exclude any tests unless they
> > >>>>>>>>>>> crash the VM. If somebody needs a list of tests that always
> > >>>>>>>>>>> pass, a separate target can be added to the build.
> > >>>>>>>>>>>
> > >>>>>>>>>>> Do you think we should add a target 'test-all' to the
> > >>>>>>>>>>> build?
> > >>>>>>>>>>>
> > >>>>>>>>>>> Thanks, Vladimir
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>> Tim Ellison (t.p.ellison@gmail.com)
> > >>>>>>>>> IBM Java technology centre, UK.
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>> --
> > >>>>>>> Richard Liang
> > >>>>>>> China Software Development Lab, IBM

-- 
Alexei Zakharov,
Intel Middleware Product Division

---------------------------------------------------------------------
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
For additional commands, e-mail: harmony-dev-help@incubator.apache.org

