ant-dev mailing list archives

From Dirk Weigenand <>
Subject regarding unit testing
Date Thu, 05 Oct 2000 09:21:19 GMT
Hi list,

I'm in the process of setting up a test environment for our project. As it is
Java based, I want to use Ant. So I have a couple of questions and suggestions.

1. Setting up a part of our tests will require that an instance of our
   program be started before and stopped after the tests are run.

   This would require the addition of one or two nested elements to the JUnit
   task that are responsible for setting up the environment and tearing it
   down afterwards.
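A hypothetical syntax for such nested elements might look like the following (the element names <setup> and <teardown>, and the class names, are my invention for illustration, not existing JUnit task syntax):

   <junit>
     <!-- hypothetical: executed once before any test runs -->
     <setup classname="com.example.ServerLauncher"/>
     <test name="com.example.ServerDependentTest"/>
     <!-- hypothetical: executed once after all tests have finished -->
     <teardown classname="com.example.ServerShutdown"/>
   </junit>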

2. Some of these tests have to be configured. What I mean here is that there
   are situations where you want to get the objects to test from the database
   via an ID which can change from one test run to another. I don't want to
   hard code these IDs in the test but want to be able to feed them in when
   the test is run. I have written an extension to JUnit that does just that:
   you can call setProperty on a test, give it the name of a property and an
   object representing its value, and the property will be set (following some
   naming conventions, of course).

   For this to work with Ant I suggest the addition of a nested element to the
   test element which can be used to configure a test case:

   <test name="" ...>
     <property name="propertyone" value="1" />
   </test>

   Of course the called method in the test class should be able to convert the
   string given in the value attribute to an object meaningful for the test.
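A minimal sketch of how such a configurable test case could work, using reflection and the usual JavaBeans setter naming convention (class and method names here are illustrative, not the actual extension I wrote):

```java
import java.lang.reflect.Method;

// Hypothetical base class for configurable test cases.
// setProperty("id", "42") looks up and invokes setId("42") on the subclass.
class ConfigurableTestCase {
    public void setProperty(String name, Object value) throws Exception {
        // JavaBeans naming convention: "id" -> "setId"
        String setter = "set"
                + Character.toUpperCase(name.charAt(0))
                + name.substring(1);
        Method m = getClass().getMethod(setter,
                                        new Class[] { value.getClass() });
        m.invoke(this, new Object[] { value });
    }
}

// Example test that receives its database ID from the build file
// instead of hard coding it.
class CustomerTest extends ConfigurableTestCase {
    private String id;

    public void setId(String id) { this.id = id; }
    public String getId()        { return id; }
}

public class ConfigurableTestDemo {
    public static void main(String[] args) throws Exception {
        CustomerTest t = new CustomerTest();
        t.setProperty("id", "4711");
        System.out.println(t.getId()); // prints "4711"
    }
}
```

The JUnit task would then call setProperty once for each nested property element before running the test.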

3. Producing reports that summarize how many tests have been run, how many
   failures occurred, and how many tests bailed out. At the moment it is a bit
   difficult to produce such an overall report since a test log is written to
   a separate file for each test.

   I have thought about ways to overcome this problem with Ant. One way would
   be to allow specifying that the output of each test run should be
   collected into one file. This would be the easiest solution, I think.

   Another way could be to use the scripting task and build up a fileset
   containing the names of the result files of the tests. Using this fileset,
   one could write an XSL stylesheet that could be used to combine all test
   reports into one XML file. I regard this as a little overkill.

   We could provide a stylesheet that translates the results of a test run
   into a human-readable format, e.g. an HTML page that summarizes these
   results.
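To illustrate the stylesheet idea, here is a rough XSLT 1.0 sketch that merges several per-test result files into one summary document. It assumes an index file listing the result files, and that each result file has a <testsuite> root element carrying tests/failures counts; both the index format and the attribute names are assumptions on my part:

   <!-- summary.xsl: run against an index file such as
        <reports>
          <report file="TEST-CustomerTest.xml"/>
          <report file="TEST-OrderTest.xml"/>
        </reports> -->
   <xsl:stylesheet version="1.0"
                   xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
     <xsl:template match="/reports">
       <summary>
         <xsl:attribute name="tests">
           <xsl:value-of select="sum(document(report/@file)/testsuite/@tests)"/>
         </xsl:attribute>
         <xsl:attribute name="failures">
           <xsl:value-of select="sum(document(report/@file)/testsuite/@failures)"/>
         </xsl:attribute>
         <!-- copy the individual suites into the combined document -->
         <xsl:copy-of select="document(report/@file)/testsuite"/>
       </summary>
     </xsl:template>
   </xsl:stylesheet>

A second stylesheet could then turn the combined document into the HTML summary page.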

Comments are appreciated.


PS: If someone is interested in the code for the configurable test case, I can
send it.
