harmony-dev mailing list archives
From Nathan Beyer <ndbe...@apache.org>
Subject Re: Rethinking testing conventions
Date Thu, 11 Mar 2010 03:54:08 GMT
On Wed, Mar 10, 2010 at 3:39 PM, Jesse Wilson <jessewilson@google.com> wrote:
> On Tue, Mar 9, 2010 at 5:01 PM, Nathan Beyer <ndbeyer@apache.org> wrote:
>> One concept I've been working with is using annotations to describe
>> the tests for the purposes of exclusions and for platform definition.
>> The annotations can then be utilized in many ways via JUnit - method
>> rules, request processing filters and others. Here's an example of how
>> the tests might look.
>> class FileTest {
>>   @Test
>>   @Platform(os = "windows")
>>   public void testSomethingOnWindows() { }
>> }
> I would prefer to match tests to their platforms in an external file, rather
> than on the test itself.

Why? What's the value of that separation? In the context of Harmony's
development, I believe that separation has been detrimental: because
maintenance of the exclusion lists is disconnected from the tests
themselves, the excluded and failing tests are frequently forgotten.

> For example, on Android would like to mark some tests up as known to fail on
> our device, but we don't want to muck up the original test with
> Android-specific stuff. Similarly for webOS and NetBSD and whichever
> platforms are also running Harmony's tests.

These annotations would only affect execution if the tool launching
the test run wanted them to. As such, the annotations could be
completely ignored and would have no effect. Effectively, it's no
different from what we do today.
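As a sketch of how such an opt-in filter might work (the `Platform` annotation is from the example above; `shouldRun` and the substring match against `os.name` are my assumptions, not Harmony API), a runner that cares about the annotation can read it reflectively, while a runner that doesn't simply never looks:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class PlatformFilterSketch {

    // Hypothetical annotation; RUNTIME retention so a runner can read it reflectively.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Platform {
        String os();
    }

    // Example test class using the annotation.
    public static class FileTest {
        @Platform(os = "windows")
        public void testSomethingOnWindows() { }

        public void testEverywhere() { }
    }

    // A runner that ignores the annotation never calls this; a runner that
    // honors it filters each test method with a check like this one.
    public static boolean shouldRun(Method test, String currentOs) {
        Platform p = test.getAnnotation(Platform.class);
        return p == null || currentOs.toLowerCase().contains(p.os().toLowerCase());
    }

    public static void main(String[] args) throws Exception {
        Method windowsOnly = FileTest.class.getMethod("testSomethingOnWindows");
        Method everywhere = FileTest.class.getMethod("testEverywhere");
        System.out.println(shouldRun(windowsOnly, "Linux"));     // false
        System.out.println(shouldRun(windowsOnly, "Windows 7")); // true
        System.out.println(shouldRun(everywhere, "Linux"));      // true
    }
}
```

In JUnit terms the same check could live in a method rule or a `Request` filter, which is what makes the annotation purely advisory.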

> Our test runner (to be open sourced soon!) accepts a set of files that
> specify the expected outcome of various tests. When we run our tests against
> the RI, we give it one set of files ("sunjava5expectations.txt"); when we
> run it on Android we give it another set of files
> ("dalvikexpectations.txt").

I don't think this concept would be in conflict with such a tool.
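To illustrate why the two mechanisms can coexist (the file format here is hypothetical, loosely modeled on the expectation files described above; `ExpectationsSketch` and its methods are illustrative names, not part of any real runner), an external expectations list is just another input a runner may consult alongside, or instead of, the annotations:

```java
import java.util.HashSet;
import java.util.Set;

public class ExpectationsSketch {

    // Names of tests an external file marks as expected to fail.
    private final Set<String> expectedFailures = new HashSet<String>();

    // Parse one line of a hypothetical expectations file:
    // one fully qualified test name per line, '#' starts a comment.
    public void parseLine(String line) {
        String trimmed = line.trim();
        if (!trimmed.isEmpty() && !trimmed.startsWith("#")) {
            expectedFailures.add(trimmed);
        }
    }

    // A runner with no expectations file never calls this; a runner with
    // one consults it in addition to any in-source annotations.
    public boolean isExpectedToFail(String testName) {
        return expectedFailures.contains(testName);
    }

    public static void main(String[] args) {
        ExpectationsSketch e = new ExpectationsSketch();
        e.parseLine("# device-specific known failures");
        e.parseLine("tests.api.java.io.FileTest#testSomethingOnWindows");
        System.out.println(e.isExpectedToFail("tests.api.java.io.FileTest#testSomethingOnWindows")); // true
        System.out.println(e.isExpectedToFail("tests.api.java.io.FileTest#testEverywhere"));         // false
    }
}
```

Because the file and the annotations are independent inputs, a platform like Android can layer its own expectations on top without touching the test sources.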
