harmony-dev mailing list archives

From George Harley <george.c.har...@googlemail.com>
Subject Re: [classlib] Testing conventions - a proposal
Date Wed, 26 Jul 2006 10:35:39 GMT
Alexei Zakharov wrote:
> Hi George,
>
> Sorry for the late reply.
>

Hi Alexei,

Not a problem. Especially when my reply to you is even later (sorry).


>> It looks like you are using an "os.any" group for those test methods
>> (the majority) which may be run anywhere. That's a different approach to
>> what I have been doing. I have been thinking more along the lines of
>> avoiding the creation of groups that cover the majority of tests and
>> trying to focus on groups that identify those "edge cases" like
>> platform-specific, temporarily broken, temporarily broken on platform
>> "os.blah" etc. This means my tests that are "run anywhere" and are
>> "public api" type (as opposed to being specific to the Harmony
>> implementation) are just annotated with "@Test". I guess that the
>> equivalent in your scheme would be annotated as "@Test(groups =
>> {"os.any", "type.api"})" ?
>
> Well, in general I like the idea of having a standalone @Test that
> denotes something, for example "os.any & type.api". The simpler, the
> better – as you have already said. But at the time I wrote my
> previous message I didn't know how to implement this idea technically.
> TestNG just filters away all tests that don't have a group attribute
> if a group "include" filter is specified. It seems Richard had the
> same problem.

Right. That was the point when the BeanShell option began to look good 
to me.
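
To make the problem concrete, here is a minimal sketch (the class and
method names are invented; only the group names come from this
thread). With an include filter such as groups="os.any" in force,
TestNG keeps the first method but silently drops the second, because a
bare "@Test" method belongs to no group for the include filter to
match on:

import org.testng.annotations.Test;

public class FilterExampleTest {

    // Selected by an include filter on "os.any": the method is an
    // explicit member of that group.
    @Test(groups = {"os.any"})
    public void testWithGroup() {
        // ...
    }

    // Silently dropped by the same include filter: there is no group
    // attribute to match on, even though the intent is "run anywhere".
    @Test
    public void testWithoutGroup() {
        // ...
    }
}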


> This is why the group "os.any" appeared in my script. After a few
> experiments I've realized we can avoid using the include filter and
> use only "excludedGroups" instead. That way, we can modify the above
> script:
>
> <condition property="not_my_platform1" value="os.win.IA32">
>     <not><os family="Windows"/></not>
> </condition>
> <condition property="not_my_platform2" value="os.linux.IA32">
>     <not><and>
>         <os name="linux"/>
>         <os family="unix"/>
>     </and></not>
> </condition>
> <condition property="not_my_platform3" value="os.mac">
>     <not><os family="mac"/></not>
> </condition>
>
> <property name="not_my_platform1" value=""/>
> <property name="not_my_platform2" value=""/>
> <property name="not_my_platform3" value=""/>
> <property name="not_my_platforms"
>     value="${not_my_platform1},${not_my_platform2},${not_my_platform3}"/>
>
> <target name="run" description="Run tests">
>     <taskdef name="testng" classname="org.testng.TestNGAntTask"
>              classpath="${jdk15.testng.jar}"/>
>     <testng classpathref="run.cp"
>             outputdir="${testng.report.dir}"
>             excludedGroups="state.broken.*,${not_my_platforms}"
>             enableAssert="false"
>             jvm="${test.jvm}">
>         <classfileset dir="." includes="**/*.class"/>
>     </testng>
> </target>
>
> All tests marked with a simple "@Test" will be included in the test
> run. However, this script is IMHO less elegant than the first one,
> and "@Test(groups="os.any")" is probably more self-explanatory than a
> plain "@Test". But we would save time and reduce the size of the
> resulting source code by using the simple "@Test".
> Any thoughts?
>
> Regards,
>

I spent some time a few days ago investigating your earlier idea that
used the "os.any" group and really liked the simplicity it brought to
the Ant script, as well as the way it removed the need for a TestNG
XML file to define the tests. As you say, the exclude-only approach
set out in your more recent post is not as elegant. My vote would be
for your first approach, using the "os.any" group.
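
Just to be sure we are agreeing on the same thing, here is roughly how
I would expect test methods to be annotated under that first approach
(the class and method names are invented for illustration; the group
names are the ones we have been discussing):

import org.testng.annotations.Test;

public class ExampleApiTest {

    // The common case: a public API test that runs on any platform.
    @Test(groups = {"os.any", "type.api"})
    public void testEverywhere() {
        // ...
    }

    // A Harmony-specific test restricted to one platform.
    @Test(groups = {"os.linux.IA32", "type.impl"})
    public void testLinuxSpecificDetail() {
        // ...
    }

    // An API test known to be broken on Windows only; the Windows
    // configuration excludes the state.broken.win.IA32 group.
    @Test(groups = {"os.any", "type.api", "state.broken.win.IA32"})
    public void testBrokenOnWindows() {
        // ...
    }
}

With your first script, groups="os.any, os.${platform}" picks up the
first method everywhere and the second one on Linux, while
excludedGroups="state.broken, state.broken.${platform}" keeps the
third out of Windows runs only.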

While I personally don't have a hang-up about delegating what gets
tested to a separate artefact like a testng.xml file, it is one more
file format to learn and (if BeanShell gets used inside it) one more
"language" required. Your "os.any" approach keeps the whole test
narrative firmly within the Ant file, which is more familiar to us all
and so that bit easier to maintain. I'm not completely off the idea of
using a testng.xml file, but I think its introduction should be held
back until we *really* need it.

Best regards,
George

>
> 2006/7/20, George Harley <george.c.harley@googlemail.com>:
>> Alexei Zakharov wrote:
>> > George,
>> >
>> > I remember my past experience with BeanShell - I was trying to
>> > create a custom BeanShell task for Ant 1.6.1. I did succeed in the
>> > end, but I remember it as a rather unpleasant experience. At that
>> > time BeanShell appeared to me to be a not very well tested
>> > framework. Please don't throw rocks at me now, I am just talking
>> > about my old impressions. BeanShell has probably become better
>> > since then.
>> >
>>
>> Hi Alexei,
>>
>> No rocks. I promise :-)
>>
>>
>> > But... Do we really need BeanShell here? Why can't we manage
>> > everything from build.xml, without extra testng.xml files? I mean
>> > something like this:
>> >
>> > <!-- determines the OS -->
>> > <condition property="platform" value="win.IA32">
>> >     <os family="Windows"/>
>> > </condition>
>> > <condition property="platform" value="linux.IA32">
>> >     <and>
>> >         <os name="linux"/>
>> >         <os family="unix"/>
>> >     </and>
>> > </condition>
>> >
>> > <property name="groups.included" value="os.any, os.${platform}"/>
>> > <property name="groups.excluded"
>> >     value="state.broken, state.broken.${platform}"/>
>> >
>> > <target name="run" description="Run tests">
>> >     <taskdef name="testng" classname="org.testng.TestNGAntTask"
>> >              classpath="${jdk15.testng.jar}"/>
>> >     <testng classpathref="run.cp"
>> >             outputdir="${testng.report.dir}"
>> >             groups="${groups.included}"
>> >             excludedGroups="${groups.excluded}">
>> >         <classfileset dir="." includes="**/*.class"/>
>> >     </testng>
>> > </target>
>> >
>> > Does this make sense?
>> >
>> > Thanks,
>>
>> Yes, that makes sense and if it gives the degree of control that we need
>> then I would be all for it. The simpler the better.
>>
>> It looks like you are using an "os.any" group for those test methods
>> (the majority) which may be run anywhere. That's a different approach to
>> what I have been doing. I have been thinking more along the lines of
>> avoiding the creation of groups that cover the majority of tests and
>> trying to focus on groups that identify those "edge cases" like
>> platform-specific, temporarily broken, temporarily broken on platform
>> "os.blah" etc. This means my tests that are "run anywhere" and are
>> "public api" type (as opposed to being specific to the Harmony
>> implementation) are just annotated with "@Test". I guess that the
>> equivalent in your scheme would be annotated as "@Test(groups =
>> {"os.any", "type.api"})" ?
>>
>> If I have inferred correctly from your Ant fragment then I think it
>> means requiring more information on the annotations. I'm not throwing
>> rocks at that idea (remember my promise ?), just trying to draw out
>> the differences in our approaches. When I get a chance I will try and
>> explore your idea further.
>>
>> I really appreciate your input here.
>>
>> Best regards,
>> George
>>
>>
>> >
>> > 2006/7/20, George Harley <george.c.harley@googlemail.com>:
>> >> Richard Liang wrote:
>> >> >
>> >> >
>> >> > George Harley wrote:
>> >> >> Richard Liang wrote:
>> >> >>>
>> >> >>>
>> >> >>> George Harley wrote:
>> >> >>>> Hi,
>> >> >>>>
>> >> >>>> If annotations were to be used to help us categorise tests in
>> >> >>>> order to simplify the definition of test configurations -
>> >> >>>> what's included and excluded etc - then a core set of
>> >> >>>> annotations would need to be agreed by the project. Consider
>> >> >>>> the possibilities that the TestNG "@Test" annotation offers us
>> >> >>>> in this respect.
>> >> >>>>
>> >> >>>> First, if a test method was identified as being broken and
>> >> >>>> needed to be excluded from all test runs while awaiting
>> >> >>>> investigation then it would be a simple matter of setting its
>> >> >>>> enabled field like this:
>> >> >>>>
>> >> >>>> @Test(enabled=false)
>> >> >>>> public void myTest() {
>> >> >>>>     ...
>> >> >>>> }
>> >> >>>>
>> >> >>>> Temporarily disabling a test method in this way means that it
>> >> >>>> can be left in its original class and we do not have to refer
>> >> >>>> to it in any suite configuration (e.g. in the suite xml file).
>> >> >>>>
>> >> >>>> If a test method was identified as being broken on a specific
>> >> >>>> platform then we could make use of the groups field of the
>> >> >>>> "@Test" type by making the method a member of a group that
>> >> >>>> identifies its predicament. Something like this:
>> >> >>>>
>> >> >>>> @Test(groups={"state.broken.win.IA32"})
>> >> >>>> public void myOtherTest() {
>> >> >>>>     ...
>> >> >>>> }
>> >> >>>>
>> >> >>>> The configuration for running tests on Windows would then
>> >> >>>> specifically exclude any test method (or class) that was a
>> >> >>>> member of that group.
>> >> >>>>
>> >> >>>> Making a test method or type a member of a well-known group
>> >> >>>> (well-known in the sense that the name and meaning has been
>> >> >>>> agreed within the project) is essentially adding some
>> >> >>>> descriptive attributes to the test. Like adjectives (the
>> >> >>>> groups) and nouns (the tests) in the English language. To take
>> >> >>>> another example, if there was a test class that contained
>> >> >>>> methods only intended to be run on Windows and that were all
>> >> >>>> specific to Harmony (i.e. not API tests) then one could
>> >> >>>> envisage the following kind of annotation:
>> >> >>>>
>> >> >>>>
>> >> >>>> @Test(groups={"type.impl", "os.win.IA32"})
>> >> >>>> public class MyTestClass {
>> >> >>>>
>> >> >>>>     public void testOne() {
>> >> >>>>         ...
>> >> >>>>     }
>> >> >>>>
>> >> >>>>     public void testTwo() {
>> >> >>>>         ...
>> >> >>>>     }
>> >> >>>>
>> >> >>>>     @Test(enabled=false)
>> >> >>>>     public void brokenTest() {
>> >> >>>>         ...
>> >> >>>>     }
>> >> >>>> }
>> >> >>>>
>> >> >>>> Here the annotation on MyTestClass applies to all of its test
>> >> >>>> methods.
>> >> >>>>
>> >> >>>> So what are the well-known TestNG groups that we could define
>> >> >>>> for use inside Harmony ? Here are some of my initial thoughts:
>> >> >>>>
>> >> >>>>
>> >> >>>> * type.impl -- tests that are specific to Harmony
>> >> >>>>
>> >> >>>> * state.broken.<platform id> -- tests bust on a specific
>> >> >>>> platform
>> >> >>>>
>> >> >>>> * state.broken -- tests broken on every platform but where we
>> >> >>>> want to decide whether or not to run them from our suite
>> >> >>>> configuration
>> >> >>>>
>> >> >>>> * os.<platform id> -- tests that are to be run only on the
>> >> >>>> specified platform (a test could be a member of more than one
>> >> >>>> of these)
>> >> >>>>
>> >> >>>>
>> >> >>>> What does everyone else think ? Does such a scheme sound
>> >> >>>> reasonable ?
>> >> >>>>
>> >> >>> Just one question: What's the default test annotation? I mean
>> >> >>> the successful api tests which will be run on every platform.
>> >> >>> Thanks a lot.
>> >> >>>
>> >> >>> Best regards,
>> >> >>> Richard
>> >> >>
>> >> >> Hi Richard,
>> >> >>
>> >> >> I think that just the basic @Test annotation on its own will
>> >> >> suffice. Any better suggestions are welcome.
>> >> >>
>> >> > Just thinking about how to filter out the target test groups :-)
>> >> >
>> >> > I tried to use the following groups to define the win.IA32 API
>> >> > tests, but it seems that the tests with the default annotation
>> >> > @Test cannot be selected. Am I missing anything? Thanks a lot.
>> >> >
>> >> > <groups>
>> >> >     <run>
>> >> >         <include name=".*" />
>> >> >         <include name="os.win.IA32" />
>> >> >         <exclude name="type.impl" />
>> >> >         <exclude name="state.broken" />
>> >> >         <exclude name="state.broken.win.IA32" />
>> >> >         <exclude name="os.linux.IA32" />
>> >> >     </run>
>> >> > </groups>
>> >> >
>> >> > The groups I defined:
>> >> > @Test
>> >> > @Test(groups={"os.win.IA32"})
>> >> > @Test(groups={"os.win.IA32", "state.broken.win.IA32"})
>> >> > @Test(groups={"type.impl"})
>> >> > @Test(groups={"state.broken"})
>> >> > @Test(groups={"os.linux.IA32"})
>> >> > @Test(groups={"state.broken.linux.IA32"})
>> >> >
>> >> > Best regards,
>> >> > Richard.
>> >>
>> >> Hi Richard,
>> >>
>> >> Infuriating, isn't it ?
>> >>
>> >> The approach I have adopted so far is to aim for a single
>> >> testng.xml file per module that could be used for all platforms
>> >> that we run tests on. The thought of multiple testng.xml files for
>> >> each module, with each XML file including platform-specific data
>> >> duplicated across the files (save for a few platform identifiers)
>> >> seemed less than optimal.
>> >>
>> >> So how do we arrive at this single testng.xml file with awareness
>> >> of its runtime platform ? And how can that knowledge be applied in
>> >> the file to filter just the particular test groups that we want ?
>> >> Well, the approach that seems to work best for me so far is to
>> >> make use of some BeanShell script in which we can detect the
>> >> platform id as a system property and then use that inside some
>> >> pretty straightforward Java/BeanShell code to select precisely the
>> >> groups we want to run in a particular test.
>> >>
>> >> For example, in the following Ant fragment we use the testng task
>> >> to launch the tests, pointing at a specific testng.xml file
>> >> (testng-with-beanshell.xml) and also setting the platform
>> >> identifier as a system property hy.platform. In real life the
>> >> value used to set the notional "hy.platform" property would be
>> >> some agreed Ant property using a project-wide means of agreeing
>> >> platform identities ...
>> >>
>> >>
>> >> <testng jvm="${test.jre.home}/bin/java">
>> >>     <classpath location="${test.build.dir}"/>
>> >>     <jvmarg value="-showversion" />
>> >>     <jvmarg value="-Dhy.platform=win.IA32"/>
>> >>
>> >>     <xmlfileset dir="${basedir}"
>> >>                 includes="testng-with-beanshell.xml"/>
>> >>     <classfileset dir="${test.build.dir}"
>> >>                   includes="**/*Test.class" />
>> >> </testng>
>> >>
>> >>
>> >> The testng-with-beanshell.xml file contains a suite with several
>> >> tests defined. Let's consider just the test that will run all of
>> >> the API tests specific to our current platform (I think that this
>> >> is what you want to run in your setup)...
>> >>
>> >>
>> >> <test name="current.platform.api">
>> >>     <method-selectors>
>> >>         <method-selector>
>> >>             <script language="beanshell"><![CDATA[
>> >>                 import apples.*;
>> >>
>> >>                 platform = System.getProperty("hy.platform", "null");
>> >>                 if (platform.equals("null")) {
>> >>                     System.out.println("Property hy.platform not set");
>> >>                     return false;
>> >>                 }
>> >>
>> >>                 if ( !groups.containsKey("type.impl") &&
>> >>                      TestUtils.matchesAnyGroup(testngMethod.getGroups(),
>> >>                          "os." + TestUtils.getPlatform()) &&
>> >>                      !groups.containsKey("state.broken") &&
>> >>                      !groups.containsKey("state.broken." + platform) ) {
>> >>                     return true;
>> >>                 }
>> >>                 return false;
>> >>             ]]></script>
>> >>         </method-selector>
>> >>     </method-selectors>
>> >>     <packages>
>> >>         <package name="foo.bar"/>
>> >>     </packages>
>> >> </test>
>> >>
>> >>
>> >> There is some simple experimental code that I have omitted here
>> >> (my apples.TestUtils class) that does some simple pattern-matching
>> >> stuff. Originally the code in the static "matchesAnyGroup" method
>> >> was just defined in the CDATA script block as a BeanShell
>> >> function. I just moved it into a separate Java class because I saw
>> >> some reuse opportunity. Likewise the code to set the "platform"
>> >> string variable could be split out into a static TestUtils method
>> >> called getPlatform().
>> >>
>> >> Anyway, the point I guess that I am trying to make here is that
>> >> it is possible in TestNG to select the methods to test dynamically
>> >> using a little bit of scripting that (a) gives us a lot more power
>> >> than the include/exclude technique and (b) will work the same
>> >> across every platform we test on. Because BeanShell allows us to
>> >> instantiate and use Java objects of any type on the classpath, the
>> >> possibility of using more than just group membership to decide on
>> >> tests to run becomes available to us. Please refer to the TestNG
>> >> documentation for more on the capabilities of BeanShell and the
>> >> TestNG API. I had never heard of it before, never mind used it,
>> >> but still managed to get stuff working in a relatively short space
>> >> of time.
>> >>
>> >> I hope this helps. Maybe I need to write a page on the wiki or
>> >> something ?
>> >>
>> >> Best regards,
>> >> George
>> >>
>> >>
>> >>
>> >> >> Best regards,
>> >> >> George
>> >> >>
>> >> >>
>> >> >>
>> >> >>>> Thanks for reading this far.
>> >> >>>>
>> >> >>>> Best regards,
>> >> >>>> George
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>> George Harley wrote:
>> >> >>>>> Hi,
>> >> >>>>>
>> >> >>>>> Just seen Tim's note on test support classes and it really
>> >> >>>>> caught my attention as I have been mulling over this issue
>> >> >>>>> for a little while now. I think that it is a good time for
>> >> >>>>> us to return to the topic of class library test layouts.
>> >> >>>>>
>> >> >>>>> The current proposal [1] sets out to segment our different
>> >> >>>>> types of test by placing them in different file locations.
>> >> >>>>> After looking at the recent changes to the LUNI module tests
>> >> >>>>> (where the layout guidelines were applied) I have a real
>> >> >>>>> concern that there are serious problems with this approach.
>> >> >>>>> We have started down a track of just continually growing the
>> >> >>>>> number of test source folders as new categories of test are
>> >> >>>>> identified and IMHO that is going to bring complexity and
>> >> >>>>> maintenance issues with these tests.
>> >> >>>>>
>> >> >>>>> Consider the dimensions of tests that we have ...
>> >> >>>>>
>> >> >>>>> API
>> >> >>>>> Harmony-specific
>> >> >>>>> Platform-specific
>> >> >>>>> Run on classpath
>> >> >>>>> Run on bootclasspath
>> >> >>>>> Behaves different between Harmony and RI
>> >> >>>>> Stress
>> >> >>>>> ...and so on...
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> If you weigh up all of the different possible permutations
>> >> >>>>> and then consider that the above list is highly likely to be
>> >> >>>>> extended as things progress, it is obvious that we are
>> >> >>>>> eventually heading for large amounts of related test code
>> >> >>>>> scattered or possibly duplicated across numerous "hard
>> >> >>>>> wired" source directories. How maintainable is that going to
>> >> >>>>> be ?
>> >> >>>>>
>> >> >>>>> If we want to run different tests in different
>> >> >>>>> configurations then IMHO we need to be thinking a whole lot
>> >> >>>>> smarter. We need to be thinking about keeping tests for
>> >> >>>>> specific areas of functionality together (thus easing
>> >> >>>>> maintenance); we need something quick and simple to
>> >> >>>>> re-configure if necessary (pushing whole directories of
>> >> >>>>> files around the place does not seem a particularly
>> >> >>>>> lightweight approach); and something that is not going to
>> >> >>>>> potentially mess up contributed patches when the file they
>> >> >>>>> patch is found to have been recently pushed from source
>> >> >>>>> folder A to B.
>> >> >>>>>
>> >> >>>>> To connect into another recent thread, there have been some
>> >> >>>>> posts lately about handling some test methods that fail on
>> >> >>>>> Harmony and have meant that entire test case classes have
>> >> >>>>> been excluded from our test runs. I have also been noticing
>> >> >>>>> some API test methods that pass fine on Harmony but fail
>> >> >>>>> when run against the RI. Are the different behaviours down
>> >> >>>>> to errors in the Harmony implementation ? An error in the RI
>> >> >>>>> implementation ? A bug in the RI Javadoc ? Only after some
>> >> >>>>> investigation has been carried out do we know for sure. That
>> >> >>>>> takes time. What do we do with the test methods in the
>> >> >>>>> meantime ? Do we push them round the file system into yet
>> >> >>>>> another new source folder ? IMHO we need a testing strategy
>> >> >>>>> that enables such "problem" methods to be tracked easily
>> >> >>>>> without disruption to the rest of the other tests.
>> >> >>>>>
>> >> >>>>> A couple of weeks ago I mentioned that the TestNG framework
>> >> >>>>> [2] seemed like a reasonably good way of allowing us to both
>> >> >>>>> group together different kinds of tests and permit the
>> >> >>>>> exclusion of individual tests/groups of tests [3]. I would
>> >> >>>>> like to strongly propose that we consider using TestNG as a
>> >> >>>>> means of providing the different test configurations
>> >> >>>>> required by Harmony. Using a combination of annotations and
>> >> >>>>> XML to capture the kinds of sophisticated test
>> >> >>>>> configurations that people need, and that allows us to
>> >> >>>>> specify down to the individual method, has got to be more
>> >> >>>>> scalable and flexible than where we are headed now.
>> >> >>>>>
>> >> >>>>> Thanks for reading this far.
>> >> >>>>>
>> >> >>>>> Best regards,
>> >> >>>>> George
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> [1] http://incubator.apache.org/harmony/subcomponents/classlibrary/testing.html
>> >> >>>>> [2] http://testng.org
>> >> >>>>> [3] http://mail-archives.apache.org/mod_mbox/incubator-harmony-dev/200606.mbox/%3c44A163B3.6080005@googlemail.com%3e
>> >
>> >
>>
>>
>
>


---------------------------------------------------------------------
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
For additional commands, e-mail: harmony-dev-help@incubator.apache.org

