xmlgraphics-batik-dev mailing list archives

From thomas.dewe...@kodak.com
Subject Re: Regard: setup and results
Date Thu, 16 Apr 2009 10:36:54 GMT
Hi Helder, Cameron,

Helder Magalhães <helder.magalhaes@gmail.com> wrote on 04/15/2009 01:39:14 PM:

> > It would be great if we could get
> > the latest SVG 1.1 test suite run in regard, rather than the BE test
> > suite.
> 
> Yeah, that would be the best, though it sounds like quite a bit of
> effort -- any ideas about the set of actions needed to accomplish this
> (even if very rough/high-level)? More related stuff below.

   I don't know how much the test suite has changed but at least
for the 'static' (and some of the animation) tests it's easy to
add them. The majority of the work is in trying to decide if the 
output is correct (although that is really a secondary concern[1]). 

   If you look in 'test-resources/org/apache/batik/test' there are 
a number of xml files that drive the regard run.  The file regard.xml is
the top level file.  It then references samplesRendering.xml and
beSuite.xml.  These files reference a class that subclasses
PreconfiguredRenderingTest and basically just provides path references.
The rest of the file simply sets the file name as the 'id' (it
can include directories).
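
   As a rough sketch of what such a driver file looks like (the element
and attribute names here are illustrative, modelled on the description
above rather than copied from the actual Batik sources), an entry might
be something like:

    <!-- Hypothetical sketch of a regard driver file entry; the real
         samplesRendering.xml may differ in element/attribute names. -->
    <testSuite id="samplesRendering" name="Samples Rendering">
        <!-- The referenced class subclasses PreconfiguredRenderingTest
             and supplies the sample/reference directory paths. -->
        <testGroup class="org.apache.batik.test.svg.SamplesRenderingTest">
            <!-- Each test's 'id' is the file name; it can include
                 directories, as with the gradient tests below. -->
            <test id="tests/spec/paints/linearGradientLine"/>
            <test id="tests/spec/paints/radialGradientLine"/>
            <test id="tests/spec/paints/gradientPoint"/>
        </testGroup>
    </testSuite>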

   Supporting the new test suite should involve copying the existing
BERenderingTest and updating it for the new test suite directories.
Then create a new .xml file that references all of the 'appropriate'
tests from the new test suite.
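
   Concretely, and again only as an illustrative sketch (the class name
SVG11RenderingTest, the file name svg11Suite.xml, and the test ids are
assumptions, not actual Batik sources -- though the ids follow the W3C
SVG 1.1 test suite's naming convention), the new file might look like:

    <!-- Hypothetical svg11Suite.xml, modelled on beSuite.xml: a copy of
         BERenderingTest (here called SVG11RenderingTest) repointed at
         the SVG 1.1 test suite directories, plus one entry per test. -->
    <testSuite id="svg11Suite" name="SVG 1.1 Test Suite Rendering">
        <testGroup class="org.apache.batik.test.svg.SVG11RenderingTest">
            <test id="paths-data-01-t"/>
            <test id="painting-stroke-01-t"/>
            <!-- ...one entry per 'appropriate' test... -->
        </testGroup>
    </testSuite>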

   Getting the 'interactive' tests and animation tests would be
more work.  Although there is already some support for running
animations/scripts and saving the result at a particular time.

> Related to this, I'd propose:
>  * Creating a bug for passing the SVG 1.1 full test suite (for tracking);
>  * Starting to create bugs for tests currently failing, marking a
> dependency on the first one.

   I don't think it's great to create a bunch of bugs with no one 
on tap to start addressing them in any sort of systematic way.  By 
the time someone does get around to trying to fix them I suspect the 
bug reports will be out of date, and it will be extra work just to
figure out what is still legitimate and what isn't.

> >> 2. About a few runs' results, I've noticed a few relevant
> >> rendering regressions:
> >>  * samples/tests/spec/paints/linearGradientLine.svg (missing most
> >> relevant part of rendering);
> >>  * samples/tests/spec/paints/radialGradientLine.svg (missing most
> >> relevant part of rendering);
> >>  * samples/tests/spec/paints/gradientPoint.svg (missing most relevant
> >> part of rendering).
> >
> > I think that those tests are invalid, since a clarification was made to
> > the spec and official test suite about how objectBoundingBox gradients
> > should work when the bounding box has zero width or height.

   Yes, there was a clarification, and the current rendering matches that
clarification, but the test-reference doesn't match the now-correct 
rendering.  So I think we just need to update the test-reference.

> So do you believe the tests should be modified or simply
> disabled/removed from regard?

I think the test is still a good test.

> >> Also, minor potential color matching regressions or JVM improvements
> >> (not being familiar with color matching causes me not to tell the
> >> difference):
> [...]
> > I guess these will need careful analysis to determine if the resulting
> > colours are correct.
> 
> Thomas, can you provide any insight here? Or should a bug be created
> for this? (I'm afraid it may get lost within the mailing list...)

   I currently can't use regard because no rendering matches on my
machine.  I've tried to build a good set of accepted variations, but
for a fairly large subset of the tests I haven't been able to figure
out how to do that (though I haven't had much time to look into it).

[1] One of the major purposes of regard is simply to notice when
anything changes.  After all, if you always get 100 failures, it's
hard to notice when a regression makes it 101.
