Date: Fri, 30 Jun 2006 13:24:05 +0400
From: "Alexei Zakharov"
To: harmony-dev@incubator.apache.org
Subject: Re: [classlib][testing] excluding the failed tests

Hi Nathan,

> I think we may be unnecessarily complicating some of this by assuming that
> all of the donated tests that are currently excluded and failing are
> completely valid. I believe that the currently excluded tests are either
> failing because they aren't isolated according to the suggested test
> layout or they are invalid tests.

Let me give a concrete example. Currently for java.beans we have more than
a thousand tests in 50 classes, and about 30% of them fail. These are not
invalid tests; they simply came from a different origin than the java.beans
implementation currently in svn. They mostly test compatibility with the RI
that the current implementation has problems with. I am now working on
enabling these 30%, but it is not an easy task and will take time (internal
stuff needs refactoring, etc.). It is a standard situation for a test class
to have, for example, 30 passing tests and 9 failing ones.

Since there are failures, the whole test class is excluded. As a result we
currently have only 22 test classes enabled, with just 130 tests inside. So
about a thousand (!) passing tests are thrown overboard. IMHO this is not a
normal situation and we need to find some solution, at least for the period
while these 30% are being fixed.
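For illustration, the suite() approach I suggested earlier would look
roughly like this. It is only a sketch - the class and test names below are
made up, not the real java.beans tests:

    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    public class PropertyEditorSupportTest extends TestCase {

        public PropertyEditorSupportTest(String name) {
            super(name);
        }

        // Enumerate only the tests that are known to pass. The failing
        // ones stay out of the suite until the implementation is fixed,
        // instead of the whole class being excluded from the build.
        public static Test suite() {
            TestSuite suite = new TestSuite("PropertyEditorSupportTest");
            suite.addTest(new PropertyEditorSupportTest("testSetValue"));
            suite.addTest(new PropertyEditorSupportTest("testGetAsText"));
            // ... the remaining known-good tests ...
            return suite;
        }

        public void testSetValue() { /* ... */ }

        public void testGetAsText() { /* ... */ }
    }

This way the 30 passing tests keep running while the other 9 are being
fixed, and nothing is thrown overboard.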
2006/6/30, Nathan Beyer:
>
> > -----Original Message-----
> > From: Geir Magnusson Jr [mailto:geir@pobox.com]
> >
> > George Harley wrote:
> > > Nathan Beyer wrote:
> > >> Two suggestions:
> > >> 1. Approve the testing strategy [1] and implement/rework the modules
> > >> appropriately.
> > >> 2. Fix the tests!
> > >>
> > >> -Nathan
> > >>
> > >> [1]
> > >> http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html
> > >
> > > Hi Nathan,
> > >
> > > What are your thoughts on running or not running test cases
> > > containing problematic test methods while those methods are being
> > > investigated and fixed up?
> >
> > That's exactly the problem. We need a clear way to maintain and track
> > this stuff.
> >
> > geir
>
> How are other projects handling this? My opinion is that tests which are
> expected and known to pass should always be running; if they fail and the
> failure can be independently recreated, then it is something to be posted
> on the list, if trivial (a typo in a build file?), or logged as a JIRA
> issue.
>
> If it's broken for a significant amount of time (weeks, months), then
> rather than excluding the test, I would propose moving it to a "broken"
> or "possibly invalid" source folder that's out of the test path. If it
> doesn't already have a JIRA issue, then one should be created.
>
> I've been living with consistently failing tests for a long time now.
> Recently it was the unstable Socket tests, but I've been seeing the WinXP
> long file name [1] test failing for months.
>
> I think we may be unnecessarily complicating some of this by assuming
> that all of the donated tests that are currently excluded and failing
> are completely valid. I believe that the currently excluded tests are
> either failing because they aren't isolated according to the suggested
> test layout or they are invalid tests; I suspect that HARMONY-619 [1] is
> a case of the latter.
>
> So I go back to my original suggestion: implement the testing proposal,
> then fix/move any excluded tests to where they work properly, or
> determine that they are invalid and delete them.
>
> [1] https://issues.apache.org/jira/browse/HARMONY-619
>
> > > Best regards,
> > > George
> > >
> > >>
> > >>> -----Original Message-----
> > >>> From: Geir Magnusson Jr [mailto:geir@pobox.com]
> > >>> Sent: Tuesday, June 27, 2006 12:09 PM
> > >>> To: harmony-dev@incubator.apache.org
> > >>> Subject: Re: [classlib][testing] excluding the failed tests
> > >>>
> > >>> George Harley wrote:
> > >>>
> > >>>> Hi Geir,
> > >>>>
> > >>>> As you may recall, a while back I floated the idea and supplied
> > >>>> some seed code to define all known failing test methods in an XML
> > >>>> file (an "exclusions list") that could be used by JUnit at test
> > >>>> run time to skip over them while allowing the rest of the test
> > >>>> methods in a class to run [1]. Obviously I thought about that when
> > >>>> catching up with this thread but, more importantly, your comment
> > >>>> about being reluctant to have more dependencies on JUnit also
> > >>>> motivated me to go off and read some more about TestNG [2].
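> > >>>> To make the idea concrete, the runtime filtering boils down to
> > >>>> something like the sketch below. This is not the actual
> > >>>> HARMONY-263 seed code - the helper name is made up and the loading
> > >>>> of the XML exclusions file is omitted; it only shows the skipping
> > >>>> step:
> > >>>>
> > >>>>     import java.util.Enumeration;
> > >>>>     import java.util.Set;
> > >>>>     import junit.framework.Test;
> > >>>>     import junit.framework.TestCase;
> > >>>>     import junit.framework.TestSuite;
> > >>>>
> > >>>>     public class ExcludingSuite {
> > >>>>         // Builds a suite from testClass, skipping any test method
> > >>>>         // whose name is on the exclusions list (e.g. parsed from
> > >>>>         // the XML file); all the other test methods still run.
> > >>>>         public static Test suite(Class testClass, Set excluded) {
> > >>>>             TestSuite all = new TestSuite(testClass);
> > >>>>             TestSuite filtered = new TestSuite(all.getName());
> > >>>>             for (Enumeration e = all.tests(); e.hasMoreElements();) {
> > >>>>                 Test t = (Test) e.nextElement();
> > >>>>                 if (t instanceof TestCase
> > >>>>                         && excluded.contains(((TestCase) t).getName())) {
> > >>>>                     continue; // known failure - skip, keep the rest
> > >>>>                 }
> > >>>>                 filtered.addTest(t);
> > >>>>             }
> > >>>>             return filtered;
> > >>>>         }
> > >>>>     }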
> > >>>>
> > >>>> It was news to me that TestNG provides out-of-the-box support for
> > >>>> excluding specific test methods as well as groups of methods
> > >>>> (where the groups are declared in source file annotations or
> > >>>> Javadoc comments). Even better, it can do this on existing JUnit
> > >>>> test code, provided that the necessary meta-data is present
> > >>>> (annotations if compiling to a 1.5 target; Javadoc comments if
> > >>>> targeting 1.4 like we currently are). There is a utility available
> > >>>> in the TestNG download, and also in the Eclipse support plug-in,
> > >>>> that helps migrate directories of existing JUnit tests to TestNG
> > >>>> by adding in the basic meta-data (although for me the Eclipse
> > >>>> version also tried to break the test class inheritance from
> > >>>> junit.framework.TestCase, which was definitely not what was
> > >>>> required).
> > >>>>
> > >>>> Perhaps ... just perhaps ... we should be looking at something
> > >>>> like TestNG (or my wonderful "exclusions list" :-) ) to provide
> > >>>> the granularity of test configuration that we need.
> > >>>>
> > >>>> Just a thought.
> > >>>>
> > >>> How 'bout that ;)
> > >>>
> > >>> geir
> > >>>
> > >>>> Best regards,
> > >>>> George
> > >>>>
> > >>>> [1] http://issues.apache.org/jira/browse/HARMONY-263
> > >>>> [2] http://testng.org
> > >>>>
> > >>>> Geir Magnusson Jr wrote:
> > >>>>
> > >>>>> Alexei Zakharov wrote:
> > >>>>>
> > >>>>>> Hi,
> > >>>>>> +1 for (3), but I think it will be better to define a suite()
> > >>>>>> method and enumerate the passing tests there, rather than to
> > >>>>>> comment out the code.
> > >>>>>
> > >>>>> I'm reluctant to see more dependencies on JUnit when we could
> > >>>>> control this at a higher level, in the build system.
> > >>>>>
> > >>>>> Hard to explain, I guess, but if our exclusions are buried in
> > >>>>> .java, I would think that reporting and tracking over time is
> > >>>>> going to be much harder.
> > >>>>>
> > >>>>> geir
> > >>>>>
> > >>>>>> 2006/6/27, Richard Liang:
> > >>>>>>
> > >>>>>>> Hello Vladimir,
> > >>>>>>>
> > >>>>>>> +1 to option 3). We shall comment the failed test cases out and
> > >>>>>>> add a FIXME to remind us to diagnose the problems later. ;-)
> > >>>>>>>
> > >>>>>>> Vladimir Ivanov wrote:
> > >>>>>>>
> > >>>>>>>> I see your point. But I feel that we can miss regressions in
> > >>>>>>>> non-tested code if we exclude TestCases. Now, for example, we
> > >>>>>>>> miss testing of java.lang.Class/Process/Thread/String and some
> > >>>>>>>> other classes.
> > >>>>>>>>
> > >>>>>>>> While we have failing tests and don't want to pay attention to
> > >>>>>>>> these failures, we can:
> > >>>>>>>> 1) Leave things as is - do not run TestCases with failing
> > >>>>>>>> tests.
> > >>>>>>>> 2) Split each passing/failing TestCase into a separate
> > >>>>>>>> "failing TestCase" and "passing TestCase", and exclude the
> > >>>>>>>> "failing TestCases". When a test or implementation is fixed,
> > >>>>>>>> we move tests from the failing TestCase to the passing one.
> > >>>>>>>> 3) Comment out failing tests in TestCases. It is better to run
> > >>>>>>>> 58 tests instead of 0 for String.
> > >>>>>>>> 4) Run all TestCases, then compare the test run results with
> > >>>>>>>> the 'list of known failures' and see whether new failures
> > >>>>>>>> appeared. This, I think, is better than 1, 2 and 3, but the
> > >>>>>>>> overhead is that we support two lists - the list of known
> > >>>>>>>> failing tests, and the exclude list where we put crashing
> > >>>>>>>> tests.
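> > >>>>>>>> The comparison step in (4) is cheap, by the way. A rough
> > >>>>>>>> sketch (the file names are made up; the format is one test
> > >>>>>>>> name per line):
> > >>>>>>>>
> > >>>>>>>>     import java.io.BufferedReader;
> > >>>>>>>>     import java.io.FileReader;
> > >>>>>>>>     import java.io.IOException;
> > >>>>>>>>     import java.util.HashSet;
> > >>>>>>>>     import java.util.Set;
> > >>>>>>>>
> > >>>>>>>>     // Reports tests that failed in the current run but are
> > >>>>>>>>     // not on the list of known failures.
> > >>>>>>>>     public class FailureDiff {
> > >>>>>>>>         public static void main(String[] args) throws IOException {
> > >>>>>>>>             Set known = readNames("known_failures.txt");
> > >>>>>>>>             Set current = readNames("current_failures.txt");
> > >>>>>>>>             current.removeAll(known);
> > >>>>>>>>             if (!current.isEmpty()) {
> > >>>>>>>>                 System.out.println("NEW failures: " + current);
> > >>>>>>>>                 System.exit(1); // fail only on new regressions
> > >>>>>>>>             }
> > >>>>>>>>         }
> > >>>>>>>>
> > >>>>>>>>         private static Set readNames(String file) throws IOException {
> > >>>>>>>>             Set names = new HashSet();
> > >>>>>>>>             BufferedReader in = new BufferedReader(new FileReader(file));
> > >>>>>>>>             for (String s = in.readLine(); s != null; s = in.readLine()) {
> > >>>>>>>>                 names.add(s.trim());
> > >>>>>>>>             }
> > >>>>>>>>             in.close();
> > >>>>>>>>             return names;
> > >>>>>>>>         }
> > >>>>>>>>     }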
> > >>>>>>>>
> > >>>>>>>> Thanks, Vladimir
> > >>>>>>>>
> > >>>>>>>> On 6/26/06, Tim Ellison wrote:
> > >>>>>>>>
> > >>>>>>>>> Mikhail Loenko wrote:
> > >>>>>>>>>
> > >>>>>>>>>> Hi Vladimir,
> > >>>>>>>>>>
> > >>>>>>>>>> IMHO the tests are to verify that an update does not
> > >>>>>>>>>> introduce any regression. So there are two options: remember
> > >>>>>>>>>> exactly which tests may fail, or remember that all tests
> > >>>>>>>>>> must pass. I believe the latter one is a bit easier and
> > >>>>>>>>>> safer.
> > >>>>>>>>>
> > >>>>>>>>> +1
> > >>>>>>>>>
> > >>>>>>>>> Tim
> > >>>>>>>>>
> > >>>>>>>>>> Thanks,
> > >>>>>>>>>> Mikhail
> > >>>>>>>>>>
> > >>>>>>>>>> 2006/6/26, Vladimir Ivanov:
> > >>>>>>>>>>
> > >>>>>>>>>>> Hi,
> > >>>>>>>>>>> Working with tests, I noticed that we are excluding some
> > >>>>>>>>>>> tests just because several tests from a single TestCase
> > >>>>>>>>>>> fail.
> > >>>>>>>>>>>
> > >>>>>>>>>>> For example, the TestCase 'tests.api.java.lang.StringTest'
> > >>>>>>>>>>> has 60 tests and only 2 of them fail. But the build
> > >>>>>>>>>>> excludes the whole TestCase, and we just miss testing of
> > >>>>>>>>>>> the java.lang.String implementation.
> > >>>>>>>>>>>
> > >>>>>>>>>>> Do we really need to exclude TestCases in the 'ant test'
> > >>>>>>>>>>> target?
> > >>>>>>>>>>>
> > >>>>>>>>>>> My suggestion is: do not exclude any test unless it crashes
> > >>>>>>>>>>> the VM. If somebody needs a list of tests that always pass,
> > >>>>>>>>>>> a separate target can be added to the build.
> > >>>>>>>>>>>
> > >>>>>>>>>>> Do you think we should add a target 'test-all' to the
> > >>>>>>>>>>> build?
> > >>>>>>>>>>> Thanks, Vladimir
> > >>>>>>>>>
> > >>>>>>>>> Tim Ellison (t.p.ellison@gmail.com)
> > >>>>>>>>> IBM Java technology centre, UK.
> > >>>>>>>
> > >>>>>>> --
> > >>>>>>> Richard Liang
> > >>>>>>> China Software Development Lab, IBM

--
Alexei Zakharov,
Intel Middleware Product Division

---------------------------------------------------------------------
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
For additional commands, e-mail: harmony-dev-help@incubator.apache.org