harmony-dev mailing list archives

From George Harley <george.c.harley@googlemail.com>
Subject Re: [classlib] Testing conventions - a proposal
Date Tue, 18 Jul 2006 16:05:35 GMT
Andrew Zhang wrote:
> On 7/18/06, Alexei Zakharov <alexei.zakharov@gmail.com> wrote:
>>
>> Hi,
>>
>> George wrote:
>> > > Thanks, but I don't see it as final yet really. It would be great to
>> > > prove the worth of this by doing a trial on one of the existing
>> > > modules, ideally something that contains tests that are
>> > > platform-specific.
>>
>> I volunteer to do this trial for the beans module. I'm not sure that
>> beans contains any platform-specific tests, but I know for sure it has a
>> lot of failed tests - so we can try TestNG with a real workload. I would
>> also like to do the same job with JUnit 4.0 and compare the results -
>> exactly what is simpler/harder/better etc. in practice.
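
For reference, the JUnit 4.0 side of that comparison has no equivalent of
TestNG's groups; roughly, the only per-method tool for a failing test is
@Ignore, as in this sketch (the class and method names are invented):

    import org.junit.Ignore;
    import org.junit.Test;

    public class BeansTest {

        // With JUnit 4.0 a broken test can only be switched off in the
        // source itself; there is no group/XML mechanism to exclude it
        // per platform or per configuration.
        @Ignore
        @Test
        public void testBrokenOnHarmony() {
            // ...
        }
    }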
>
>
> Alexei, great! :)
>
>> If Andrew does the same job for nio we will have two separate
>> experiences that help us to move further in choosing the right testing
>> framework.
>
>
> So shall we move to the next step now? That is to say, integrate TestNG
> and define the annotations (George has given the first version :) ).
>
> If no one objects, I volunteer to have a try on nio module. :)
> Thanks!
>
> Any thoughts, objections?


Hi Andrew,

I thought that Oliver had volunteered me to do it :-)

It would be terrific if you were happy to proceed with this trial on 
NIO. Please note that if you intend to use the TestNG annotations 
approach then you will need to wait for a 5.0 VM for Harmony.
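
If the 5.0 VM dependency turns out to be a blocker, my understanding is
that TestNG can also run on a 1.4 VM using Javadoc-style annotations
instead of the JDK 5.0 ones. A rough sketch of what that might look like
follows; the class and method names are invented, and the exact tag syntax
should be checked against the TestNG documentation:

    public class FileChannelTest {

        /**
         * The same grouping information, expressed as a Javadoc tag so
         * that the source still compiles under a 1.4 compiler.
         *
         * @testng.test groups = "os.windows, state.broken.windows.amd"
         */
        public void testMapReadWrite() {
            // ...
        }
    }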

Best regards,
George


>>
>> Thanks,
>>
>> 2006/7/18, Andrew Zhang <zhanghuangzhu@gmail.com>:
>> > On 7/18/06, George Harley <george.c.harley@googlemail.com> wrote:
>> > >
>> > > Oliver Deakin wrote:
>> > > > George Harley wrote:
>> > > >> <SNIP!>
>> > > >>
>> > > >> Here the annotation on MyTestClass applies to all of its test
>> > > >> methods.
>> > > >>
>> > > >> So what are the well-known TestNG groups that we could define
>> > > >> for use inside Harmony ? Here are some of my initial thoughts:
>> > > >>
>> > > >>
>> > > >> * type.impl  --  tests that are specific to Harmony
>> > > >
>> > > > So tests are implicitly API unless specified otherwise?
>> > > >
>> > > > I'm slightly confused by your definition of impl tests as "tests
>> > > > that are specific to Harmony". Does this mean that impl tests are
>> > > > only those that test classes in org.apache.harmony packages?
>> > > > I thought that impl was our way of saying "tests that need to go
>> > > > on the bootclasspath".
>> > > >
>> > > > I think I just need a little clarification...
>> > > >
>> > >
>> > > Hi Oliver,
>> > >
>> > > I was using the definition of implementation-specific tests that we
>> > > currently have on the Harmony testing conventions web page. That is,
>> > > implementation-specific tests are those that are dependent on some
>> > > aspect of the Harmony implementation and would therefore not pass
>> > > when run against the RI or other conforming implementations. It's
>> > > orthogonal to the classpath/bootclasspath issue.
>> > >
>> > >
>> > > >> * state.broken.<platform id>  --  tests bust on a specific
>> > > >> platform
>> > > >>
>> > > >> * state.broken  --  tests broken on every platform but we want
>> > > >> to decide whether or not to run from our suite configuration
>> > > >>
>> > > >> * os.<platform id>  --  tests that are to be run only on the
>> > > >> specified platform (a test could be a member of more than one of
>> > > >> these)
>> > > >
>> > > > And the defaults for these are an unbroken state and running on
>> > > > any platform.
>> > > > That makes sense...
>> > > >
>> > > > Will the platform ids be organised in a similar way to the
>> > > > platform ids we've discussed before for organisation of native
>> > > > code [1]?
>> > > >
>> > >
>> > > The actual string used to identify a particular platform can be
>> > > whatever we want it to be, just so long as we are consistent. So,
>> > > yes, the ids mentioned in the referenced email would seem a good
>> > > starting point. Do we need to include a 32-bit/64-bit identifier ?
>> > >
>> > >
>> > > > So all tests are, by default, in an all-platforms (or shared)
>> > > > group.
>> > > > If a test fails on all Windows platforms, it is marked with
>> > > > state.broken.windows.
>> > > > If a test fails on Windows but only on, say, amd hardware,
>> > > > it is marked state.broken.windows.amd.
>> > > >
>> > >
>> > > Yes. Agreed.
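
For illustration, a test method marked up that way with the proposed group
names might look like the sketch below, assuming the JDK 5.0 annotation
style (the class and method names are invented):

    import org.testng.annotations.Test;

    public class FileChannelTest {

        // No group: sits in the default all-platforms set.
        @Test
        public void testRead() {
            // ...
        }

        // Windows-only test, currently failing on Windows/AMD hardware.
        @Test(groups = { "os.windows", "state.broken.windows.amd" })
        public void testMapReadWrite() {
            // ...
        }
    }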
>> > >
>> > >
>> > > > Then when you come to run tests on your windows amd machine,
>> > > > you want to include all tests in the all-platform (shared) group,
>> > > > os.windows and os.windows.amd, and exclude all tests in
>> > > > the state.broken, state.broken.windows and
>> > > > state.broken.windows.amd groups.
>> > > >
>> > > > Does this tally with what you were thinking?
>> > > >
>> > >
>> > > Yes, that is the idea.
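
To sketch the configuration side, a testng.xml for that Windows/AMD run
could look roughly like this (the suite, test and package names are
placeholders):

    <!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
    <suite name="harmony-classlib">
      <test name="nio-windows-amd">
        <groups>
          <run>
            <include name="shared"/>
            <include name="os.windows"/>
            <include name="os.windows.amd"/>
            <exclude name="state.broken"/>
            <exclude name="state.broken.windows"/>
            <exclude name="state.broken.windows.amd"/>
          </run>
        </groups>
        <packages>
          <package name="org.apache.harmony.tests.nio.*"/>
        </packages>
      </test>
    </suite>

One wrinkle: once an <include> list is given, TestNG only runs methods
that belong to a named group, so the "in the all-platforms group by
default" idea would probably need tests to carry an explicit shared (or
similar) group, or the run to be driven purely by excludes.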
>> > >
>> > >
>> > > >>
>> > > >>
>> > > >> What does everyone else think ? Does such a scheme sound
>> > > >> reasonable ?
>> > > >
>> > > > I think so - it seems to cover our current requirements. Thanks
>> > > > for coming up with this!
>> > > >
>> > >
>> > > Thanks, but I don't see it as final yet really. It would be great to
>> > > prove the worth of this by doing a trial on one of the existing
>> > > modules, ideally something that contains tests that are
>> > > platform-specific.
>> >
>> >
>> > Hello George, how about doing a trial on the NIO module?
>> >
>> > So far as I know, there are several platform-dependent tests in the
>> > NIO module. :)
>> >
>> > The assert statements are commented out in these tests, with a
>> > "FIXME" mark.
>> >
>> > Furthermore, I have also found some platform-dependent behaviours of
>> > FileChannel. If TestNG is applied to NIO, I will supplement new tests
>> > for FileChannel and fix the bugs in the source code.
>> >
>> > What's your opinion? Any suggestions/comments?
>> >
>> > Thanks!
>> >
>> > > Best regards,
>> > > George
>> > >
>> > >
>> > > > Regards,
>> > > > Oliver
>> > > >
>> > > > [1]
>> > > > http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200605.mbox/%3c44687AAA.5080302@googlemail.com%3e
>> > > >
>> > > >
>> > > >>
>> > > >> Thanks for reading this far.
>> > > >>
>> > > >> Best regards,
>> > > >> George
>> > > >>
>> > > >>
>> > > >>
>> > > >> George Harley wrote:
>> > > >>> Hi,
>> > > >>>
>> > > >>> Just seen Tim's note on test support classes and it really
>> > > >>> caught my attention as I have been mulling over this issue for a
>> > > >>> little while now. I think that it is a good time for us to
>> > > >>> return to the topic of class library test layouts.
>> > > >>>
>> > > >>> The current proposal [1] sets out to segment our different types
>> > > >>> of test by placing them in different file locations. After
>> > > >>> looking at the recent changes to the LUNI module tests (where
>> > > >>> the layout guidelines were applied) I have a real concern that
>> > > >>> there are serious problems with this approach. We have started
>> > > >>> down a track of just continually growing the number of test
>> > > >>> source folders as new categories of test are identified and IMHO
>> > > >>> that is going to bring complexity and maintenance issues with
>> > > >>> these tests.
>> > > >>>
>> > > >>> Consider the dimensions of tests that we have ...
>> > > >>>
>> > > >>> API
>> > > >>> Harmony-specific
>> > > >>> Platform-specific
>> > > >>> Run on classpath
>> > > >>> Run on bootclasspath
>> > > >>> Behaves differently between Harmony and the RI
>> > > >>> Stress
>> > > >>> ...and so on...
>> > > >>>
>> > > >>>
>> > > >>> If you weigh up all of the different possible permutations and
>> > > >>> then consider that the above list is highly likely to be
>> > > >>> extended as things progress it is obvious that we are eventually
>> > > >>> heading for large amounts of related test code scattered or
>> > > >>> possibly duplicated across numerous "hard wired" source
>> > > >>> directories. How maintainable is that going to be ?
>> > > >>>
>> > > >>> If we want to run different tests in different configurations
>> > > >>> then IMHO we need to be thinking a whole lot smarter. We need to
>> > > >>> be thinking about keeping tests for specific areas of
>> > > >>> functionality together (thus easing maintenance); we need
>> > > >>> something quick and simple to re-configure if necessary (pushing
>> > > >>> whole directories of files around the place does not seem a
>> > > >>> particularly lightweight approach); and something that is not
>> > > >>> going to potentially mess up contributed patches when the file
>> > > >>> they patch is found to have been recently pushed from source
>> > > >>> folder A to B.
>> > > >>>
>> > > >>> To connect into another recent thread, there have been some
>> > > >>> posts lately about handling some test methods that fail on
>> > > >>> Harmony and have meant that entire test case classes have been
>> > > >>> excluded from our test runs. I have also been noticing some API
>> > > >>> test methods that pass fine on Harmony but fail when run against
>> > > >>> the RI. Are the different behaviours down to errors in the
>> > > >>> Harmony implementation ? An error in the RI implementation ? A
>> > > >>> bug in the RI Javadoc ? Only after some investigation has been
>> > > >>> carried out do we know for sure. That takes time. What do we do
>> > > >>> with the test methods in the meantime ? Do we push them round
>> > > >>> the file system into yet another new source folder ? IMHO we
>> > > >>> need a testing strategy that enables such "problem" methods to
>> > > >>> be tracked easily without disruption to the rest of the tests.
>> > > >>>
>> > > >>> A couple of weeks ago I mentioned that the TestNG framework [2]
>> > > >>> seemed like a reasonably good way of allowing us to both group
>> > > >>> together different kinds of tests and permit the exclusion of
>> > > >>> individual tests/groups of tests [3]. I would like to strongly
>> > > >>> propose that we consider using TestNG as a means of providing
>> > > >>> the different test configurations required by Harmony. Using a
>> > > >>> combination of annotations and XML to capture the kinds of
>> > > >>> sophisticated test configurations that people need, and that
>> > > >>> allows us to specify down to the individual method, has got to
>> > > >>> be more scalable and flexible than where we are headed now.
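
As a concrete example of that method-level control, TestNG's XML can
switch off a single troublesome method without moving any source files,
along these lines (the class and method names are invented):

    <test name="luni-api">
      <classes>
        <class name="org.apache.harmony.tests.java.util.HashMapTest">
          <methods>
            <!-- under investigation: passes on Harmony, fails on the RI -->
            <exclude name="testEntrySetRemove"/>
          </methods>
        </class>
      </classes>
    </test>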
>> > > >>>
>> > > >>> Thanks for reading this far.
>> > > >>>
>> > > >>> Best regards,
>> > > >>> George
>> > > >>>
>> > > >>>
>> > > >>> [1]
>> > > >>> http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html
>> > > >>>
>> > > >>> [2] http://testng.org
>> > > >>> [3]
>> > > >>> http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200606.mbox/%3c44A163B3.6080005@googlemail.com%3e
>>
>> > --
>> > Andrew Zhang
>> > China Software Development Lab, IBM
>> >
>> >
>>
>>
>> -- 
>> Alexei Zakharov,
>> Intel Middleware Product Division
>>
>>
>
>


---------------------------------------------------------------------
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
For additional commands, e-mail: harmony-dev-help@incubator.apache.org

