harmony-dev mailing list archives

From Tim Ellison <t.p.elli...@gmail.com>
Subject Re: test excludes
Date Wed, 26 May 2010 11:20:54 GMT
Ah the good old unit testing discussion -- it must be release time again :-)

(more below)

On 26/May/2010 10:59, Mark Hindess wrote:
> In message <201005250742.o4P7g1wt019461@d06av01.portsmouth.uk.ibm.com>,
> Mark Hindess writes:
>> In message <AANLkTikh3nV56QB8HXSXfzLnI9jrtU-C2cRF1shElZVt@mail.gmail.com>,
>> Nathan Beyer writes:
>>> For future milestone votes, I'd like to propose that we set the
>>> expectation of zero test failures. Every time we do a milestone and I
>>> run the build and tests, I find myself looking at the test failures
>>> (there are ALWAYS failures) and asking 'are these expected?'  If the
>>> failures are expected, they should be excluded and the tests should
>>> just pass.
>>> If this proposal is workable, immediately after this upcoming release,
>>> all 'expected' failures will be pulled into the exclusion lists.
>> I was thinking this too.  The problem for me is that:
>> 1) Our test coverage isn't brilliant, so we need a way to exclude
>> individual tests, not the tests for a whole class.  We have too many
>> excluded tests already[0].

I agree that excluding whole classes is too broad.

>> 2) The JDWP test framework seems to pass most of the time on the 5.0
>> code base but seems to fail quite often on the 6.0 branch.  I normally
>> resort to doing 10 test runs and comparing the results of all runs. I
>> typically see every test pass on at least one run.  This leads me to
>> believe that the improved JDWP in the 6.0 branch is merely exposing race
>> conditions in the test framework.  It would be great to fix the test
>> framework so that these are avoided.  If we tried to exclude tests that
>> fail regularly on the java6 JDWP, we would probably end up removing most of
>> the tests. ;-(
>> I think solving 1 is really a prerequisite to moving forward with your
>> (excellent) goal.  We should make that a priority after this release.  I
>> don't care if the solution is something fancy with JUnit 4 annotations
>> or something simple like adding:
>>   if (Support_Excludes.isExcluded()) {
>>     return;
>>   }
>> to problematic tests where the support class just looks up the
>> class/method name of the caller in the exclude lists.  (I actually
>> quite like the idea of doing the exclude list processing lazily at
>> run time rather than generating the list with ant.)
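
For what it's worth, I'd imagine the support class ends up looking
something like the sketch below -- I haven't read Mark's actual code, so
the property name and file handling here are pure guesswork, but the
lazy stack-trace lookup is the part I find appealing:

  import java.io.BufferedReader;
  import java.io.FileReader;
  import java.io.IOException;
  import java.util.HashSet;
  import java.util.Set;

  // Sketch only: a lazy, run-time exclude lookup keyed on the calling
  // test's class and method name.  The "hy.excludes.file" property and
  // the file format are assumptions, not what is in the branch.
  public final class Support_Excludes {

      private static final Set<String> EXCLUDED = loadExcludeList();

      public static boolean isExcluded() {
          // [0] is isExcluded() itself, [1] is the test method calling it.
          StackTraceElement caller = new Throwable().getStackTrace()[1];
          String key = caller.getClassName() + "#" + caller.getMethodName();
          return EXCLUDED.contains(key) || EXCLUDED.contains(caller.getClassName());
      }

      private static Set<String> loadExcludeList() {
          Set<String> entries = new HashSet<String>();
          String path = System.getProperty("hy.excludes.file");
          if (path == null) {
              return entries;
          }
          try {
              BufferedReader in = new BufferedReader(new FileReader(path));
              try {
                  String line;
                  while ((line = in.readLine()) != null) {
                      line = line.trim();
                      if (line.length() > 0 && !line.startsWith("#")) {
                          entries.add(line);
                      }
                  }
              } finally {
                  in.close();
              }
          } catch (IOException e) {
              // No readable exclude list means nothing is excluded.
          }
          return entries;
      }
  }
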
> Last night I decided to try to implement this simple exclude list
> mechanism in my build-improvement branch.  I've checked in my changes at
> https://svn.apache.org/repos/asf/harmony/enhanced/java/branches/mrh@948377

I've not reviewed the changes, but I notice that a lot of modifications
were required to the test cases themselves.

> I've automatically added isExcluded() checks to all the test methods
> in previously excluded tests (this was automated, more complicated
> than you might think, and definitely not 100% accurate, but it should
> be close enough).  The next job would be to remove/move
> the isExcluded() checks to be as fine-grained as possible so as just
> to exclude the failing tests (or individual asserts).  As I commented
> previously, many of the excludes probably just need to be removed
> completely.
> New excludes can be added as before by adding test names to the lists,
> but you also need to add an appropriate isExcluded() check to the
> test code as well.  Hopefully this will maximize coverage while still
> allowing us to have only passing tests running by default.
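
To make the fine-grained step concrete, this is how I'd picture the end
state -- a guard around just the known-failing tail of a method rather
than a bail-out at the top (a made-up test case, purely for illustration):

  import junit.framework.TestCase;

  // Hypothetical test case; only the known-failing assertions are
  // skipped, so the rest of the method still runs and counts as coverage.
  public class IntegerParsingTest extends TestCase {

      public void test_parseInt() {
          // Expected to pass on every configuration, so always run.
          assertEquals(42, Integer.parseInt("42"));

          if (Support_Excludes.isExcluded()) {
              return;  // remaining assertions are known to fail here
          }
          assertEquals(-42, Integer.parseInt("-42"));
      }
  }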

I apologize for having the 'same old discussion' on this topic, but I do
believe that it is the role of the framework to decide which tests to run.

Of course, right now there is only a simple selection based on test*()
method name, and we have discussed previously the ways in which we can
inject more information to help the framework decide if a test is valid
or not.  TestNG and JUnit4 have their own ideas about this too.
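
JUnit 4's assumption mechanism is one example of those ideas: when an
assumption fails, the default runner skips the rest of the test rather
than reporting a failure, which is roughly the behaviour we want from an
exclude.  A sketch (I haven't checked how cleanly this mixes with our
JUnit 3-style suites):

  import static org.junit.Assert.assertEquals;

  import org.junit.Assume;
  import org.junit.Test;

  public class IntegerParsingJUnit4Test {

      @Test
      public void parsesNegativeNumbers() {
          // A false assumption causes the runner to skip the test body
          // instead of counting it as a failure.
          Assume.assumeTrue(!Support_Excludes.isExcluded());
          assertEquals(-42, Integer.parseInt("-42"));
      }
  }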

Modifying the tests themselves to make them more granular would
certainly help the framework separate out the tests we expect to pass
from those we expect to fail -- having lengthy tests written as a single
test*() method doesn't really help since there is no way of knowing the
dependencies throughout the test code.

Rather than go through the code and decide where to write the
isExcluded() checks, can we split the methods up to make them more
readily excludable?
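
Something along these lines, say (a made-up example; the point is that
each piece can then pass, fail, or be excluded on its own):

  import junit.framework.TestCase;

  // Before: one long testSplit() method, where a single platform-specific
  // failure forced us to exclude every assertion along with it.
  // After: independent methods the framework can select individually.
  public class StringSplitTest extends TestCase {

      public void test_split_simple() {
          assertEquals(2, "a,b".split(",").length);
      }

      public void test_split_trailingSeparator() {
          // Trailing empty strings are dropped by String.split().
          assertEquals(2, "a,b,".split(",").length);
      }

      public void test_split_limit() {
          assertEquals(3, "a,b,c,d".split(",", 3).length);
      }
  }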

> If different exclude conditions apply to different platforms, you can
> use:
>   package.class#methodname
>   package.class+arbitrary_tag
>   package.class#methodname+arbitrary_tag
>   arbitrary_tag
> to exclude tests -- though I'd avoid the final one unless it is something
> really broad like a VM not supporting concurrency (like the IBM v4 VMEs).
> To see what is happening, you can run the tests with:
>   -Dhy.test.vmargs=-Dhy.excludes.verbose=true
> (though this is too verbose; I might make the load method less verbose
> or use a hy.excludes.debug=true flag for that output).
> To ignore excludes and run all tests use:
>   -Dhy.test.vmargs=-Dhy.excludes.ignore=true
> Rather than using test-jre-vm-info in ant, the vm name for the exclude
> file selection is a trivial guess based on system properties.  If you
> are using a custom vm and exclude lists, then you can use:
>   -Dhy.test.vmargs=-Dhy.excludes.vm=myvm
> to set it explicitly.
> I'm tempted to do away with the isExcluded() checks/lists and replace
> them with something more readable:
>   if (Support_Excludes.isRI()) {
> or:
>   if (Support_Excludes.isLinux() && Support_Excludes.is64bit()) {
> etc., to make the Excludes more self-contained.  But that requires
> looking at the excluded tests in more detail to understand the
> requirements better, and as I said I wanted to make this a simple
> step from what we have today.
> I'd like to merge this to trunk/java6 after the code freeze but I'd
> like some feedback first.  I don't necessarily see this as a final
> solution, but I just wanted to do something since this topic has been
> discussed for far too long, and I wanted something that we could move to
> from where we are today without too much effort.  On the other hand,
> I'm not convinced that annotations are a good solution either, since
> they don't give you fine-grained control, so every distinction has to be
> represented by a separate method.
> Comments very welcome.
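
To check my reading of the entry formats above, I'd expect an exclude
list to look roughly like this (file name and entries invented for
illustration):

  # excludes.drlvm.linux.x86_64 -- hypothetical name and contents
  org.apache.harmony.luni.tests.java.net.SocketTest#test_bind
  org.apache.harmony.luni.tests.java.io.FileTest+symlinks
  org.apache.harmony.jpda.tests.jdwp.EventTest#testBreakpoint+slow_vm
  no_concurrent_support

And I'd guess the more readable helpers could be little more than
wrappers over the standard system properties -- again a sketch, not
what's in the branch:

  // Assumed helpers on Support_Excludes; os.name/os.arch are the
  // standard Java properties, everything else is guesswork.
  public static boolean isLinux() {
      return System.getProperty("os.name").toLowerCase().contains("linux");
  }

  public static boolean is64bit() {
      String arch = System.getProperty("os.arch");
      return arch != null && arch.contains("64");
  }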

My reaction is "yuk!" (well you did ask ;-) and I'd like to consider a
more declarative approach, but you are writing the code and I don't have
a concrete alternative to offer up.  I have no /technical/ objection.

[ Perhaps it's simpler for me to just go fix the tests so they all pass;
then the question of how to do excludes right will become moot ;-) ]

