db-derby-dev mailing list archives

From Jack Klebanoff <klebanoff-de...@sbcglobal.net>
Subject Re: Adding new documentation for debugging test failures in the test framework
Date Thu, 21 Apr 2005 17:40:25 GMT
I noticed two typos:

        <title>Steps to be followed</title>
      <p> 1. Frist the test/s have to be run. The details fro running the tests can be found at

"First" is misspelled "Frist". "For" is misspelled "fro".

I think that you should also suggest looking in the derby.log file when 
you cannot understand the diffs. It is found in mytest/derby.log.
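When the diff alone is cryptic, the first ERROR line in derby.log often explains it. A minimal sketch of pulling out those lines (the log content below is a stand-in so the example is self-contained; in a real run you would grep mytest/derby.log from the test output directory):

```shell
# Stand-in derby.log for illustration; a real test run writes mytest/derby.log.
mkdir -p mytest
printf 'booting Derby\nERROR 42X05: Table does not exist.\n' > mytest/derby.log
# Pull out the error lines with their line numbers.
grep -n 'ERROR' mytest/derby.log
```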

Jack Klebanoff
Shreyas Kaushik wrote:

> Hi all,
> Here is the initial draft as per Apache Forrest 0.6. Please review 
> this doc and let me know of improvements.
> When I was testing this I was unable to see the Samples tab in the web 
> site I built. The build went through successfully, but
> I had to type the link into the browser window to view the page, this 
> despite adding the following entry in
> ${derby.site.root}src/documentation/content/xdocs/site.xml:
> <samples label="Samples" href="samples/" tab="samples">
>      <debugtest label="Debugging test failures" href="debugtest.html" 
> description="Debugging Test Failures"/>
> </samples>
> Should I do anything more to see the tab?
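One thing to check (an assumption from general Forrest usage, not confirmed in this thread): Forrest 0.6 declares tabs separately in tabs.xml, next to site.xml, so the site.xml entry above would also need a matching tab declaration along these lines:

```xml
<!-- src/documentation/content/xdocs/tabs.xml (hypothetical entry) -->
<tab id="samples" label="Samples" dir="samples/"/>
```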
> ~ Shreyas
> Jean T. Anderson wrote:
>> Hi, Shreyas,
>> Writeups are so very much appreciated!  Especially writeups that can 
>> be integrated into the derby web site with a minimum of fuss.
>> For adding new content to the derby web site, the writeup below is 
>> intended to help people test new content and also help committers 
>> understand how to commit changes:
>> http://incubator.apache.org/derby/papers/derby_web.html
>> Currently the site uses forrest 0.6. The forrest project will release 
>> 0.7 soon, so I opened a task to upgrade it to 0.7 -- see 
>> http://issues.apache.org/jira/browse/DERBY-188 -- but I haven't 
>> started looking at 0.7 yet.
>> For improvements to the DITA doc source, see 
>> http://incubator.apache.org/derby/manuals/dita.html .
>> regards,
>>  -jean
>> Shreyas Kaushik wrote:
>>> Hi all,
>>>   As people on this alias might know, there was a thread running 
>>> where we discussed debugging the test failures in the Derby 
>>> harness. I plan to do a write-up of my learnings in the process (it 
>>> also includes some valuable suggestions from Kathey).
>>> I was wondering where to start? Things like,
>>> ~ How do I work with the new Apache Forrest ? Is it like adding 
>>> stuff to another HTML document ?
>>> ~ Where to find the actual docs source to start playing with ?
>>> ~ What section to put this write up in ?
>>> ~ How do I test my write up ? ( Formatting, font size..etc )
>>> Any pointers on these would help. I hope this document will be a 
>>> good one for beginners and people not so familiar with the Derby 
>>> test framework ( I am also learning ).
>>> ~ Shreyas
><?xml version="1.0"?>
>  <!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
>  <document> 
>    <header> 
>      <title>Debugging test failures with the Derby test framework</title>
>      <abstract>This document gives details of how to debug test failures. It is
>       targeted at developers who contribute code to Derby and want to
>       investigate test failures caused by their fixes. Please post questions, comments,
>       and corrections to derby-dev@db.apache.org. </abstract>
>    </header>
>    <body> 
>      <section id="introduction"> 
>        <title>Introduction</title>
>        <note>The content in this document is mostly input I received from Kathey
>        </note>
>      <p> The Derby codebase has a slightly complicated test framework. Although using the
>      framework to run tests is very simple, and the framework itself gives extensive results of the
>      tests that passed and failed, it can get really tough to debug these test failures. The
>      following sections give a step-by-step insight into debugging test failures.</p>
>      </section>    
>      <section> 
>        <title>Steps to be followed</title>        
>      <p> 1. Frist the test/s have to be run. The details fro running the tests can be found at
>          ${derby.source}/java/testing/README.htm.
>          The command for running the test/s would be something like this,</p>
>          <source>
>          java  -Dij.exceptionTrace=true -Dkeepfiles=true org.apache.derbyTesting.functionTests.harness.RunTest
>          </source>
>          <p>For the network server you would need to add the following property:
>          -Dframework=DerbyNet </p>
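>          <p>For example, putting the command and the property together (a sketch of the
>          same invocation as above, not verified output), a network server run would look like:</p>
>          <source>
>          java -Dframework=DerbyNet -Dij.exceptionTrace=true -Dkeepfiles=true org.apache.derbyTesting.functionTests.harness.RunTest
>          </source>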
>       <p> 2. Do a visual diff of the output with the canon. 
>           It will give you more context in approaching the problem.
>           In the test output directory you will see "mytest".out (filtered test output) and
>           "mytest".tmp (unfiltered test output - this one should have a trace). 
>           To get the most information, it is advised to diff the tmp file with the canon, which is checked in under 
>           java/testing/org/apache/derbyTesting/functionTests/master/ or the appropriate framework or jdk subdirectory.</p>
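>          <p>A sketch of the diff step ("mytest" is a stand-in name; the master path follows
>          the layout described above, so adjust both to your test):</p>
>          <source>
>          diff mytest.tmp java/testing/org/apache/derbyTesting/functionTests/master/mytest.out
>          </source>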
>       <p> 3. Identify the sql statement or java code causing the diff.
>           For sql scripts this is usually pretty obvious. For java programs you have to look at the test source to 
>           figure out what is going on.</p>
>       <p> 4. Evaluate the diff.  
>              Here of course starts the tricky and interesting part.
>          Look carefully at the sql command or java code that caused the diff. 
>          Think about what it should do and how it relates to your change.
>          Decide what you think the right behaviour should be:
>              a) The old behaviour
>              b) The new behaviour
>              c) Something else
>          If you have a trace, look at the trace and see if it holds any clue.
>          If you are lucky it passes right through the code you changed.
>          Often it is helpful to put that one sql command or bit of java code in a separate script or stand-alone program, 
>          so you can evaluate it independently outside of the harness and in the debugger.
>          Common results of the evaluation phase are:
>               a) An error in your code.
>               b) Someone else's test failure altogether. It is good to keep a clean client for testing this. 
>               c) A master update.  
>                  Be careful with this one. Make sure you have a valid reason to update a master.  
>                  Be especially cognizant of backward compatibility.  We don't want any real or perceived regressions to catch
>                  us by surprise.</p> 
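>          <p>For example, to evaluate one suspect sql statement outside the harness, you can
>          run it through ij directly (the script name and statement here are stand-ins):</p>
>          <source>
>          echo "select * from mytable;" > repro.sql
>          java org.apache.derby.tools.ij repro.sql
>          </source>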
>	   <note>		   
>           Two good questions to ask yourself and then answer when you post your patch:
>	        1) What is my valid reason for updating this master?
> 	        2) Might someone be surprised by this difference and perceive it as a regression?
> 	   </note>
> 	<p> 5. Resolve the issue. 
>    	       Here are some possible actions based on your evaluation in step 4:
>	            a) An error in your code.
>        		Go fix it !!!
>		    b) Someone else's test failure altogether. 
>		       Look at the recent checkins, try to guess the culprit, and post.
>		    c) A master update.
>	               Update the master and make sure you include an explanation in your patch.</p>
>	<p> 6. Report your findings.
>	    If you get stuck along the way, please post to the list, but make sure you include:
>		a) The small bit of sql or java code in question. 
>		b) A description of the old and new behaviour and what you think the right behaviour should be.
>		c) The stack trace if there is one.
>		d) What you have learned so far from your own evaluation of steps 1-3 above.
>		e) A specific question that is not going to take a lot of research for someone to answer and
>		   that you think might send you back along your way if answered.</p> 	           
>      </section>      
>    </body>
>    <footer> 
>      <legal></legal>
>    </footer>
>  </document>
