xml-general mailing list archives

From Scott_B...@lotus.com
Subject Re: Test Infrastructure Project Proposal
Date Tue, 13 Feb 2001 03:26:15 GMT

> I think you need to distinguish between what the tool can do
> and what specific tests do.  For example you talk about the
> types of tests (Conformance, API tests) but that's dependent
> on how the tests are written.  I'd rather the list focus on the tool
> itself and making sure that the test tool can run the tests that
> do the things you mentioned.

Where to draw the line is an open question; I don't know yet.  I would hope
that the infrastructure would at least provide helper tools that directly
support what we need to do.  Your very point is why I tend to think we need
something more specific than what a more general tool, such as JUnit, can do.

For Xalan conformance tests:
We have a "conf" directory that is divided up into sub-categories,
attribset, axes, boolean, etc.  Each of these directories has pairs of
XSLT/XML: axes01.xsl, axes01.xml, etc.  We then have a conf-gold directory
that has "gold" files in corresponding directories: axes01.out, etc.  When
we traverse these directories we want to do so without having to specify a
specific traversal index, i.e. traverse those directories and run the
contained tests.  Right now we have home-rolled directory iterators, but it
seems like Ant could cover similar functionality quite nicely.
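To make the traversal concrete, here is a minimal sketch of the kind of directory iterator described above.  The layout ("conf" subcategories with .xsl/.xml pairs, a parallel "conf-gold" tree with .out files) comes from the description; the class and method names are hypothetical, and Ant's directory scanners could replace this home-rolled loop:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

/** Walks a conf-style tree and pairs each .xsl stylesheet with its .xml
 *  input and the matching gold file in a parallel conf-gold tree.
 *  Names here are hypothetical; Ant filesets could do the same walk. */
public class ConfDirIterator {
    /** Returns {xsl, xml, gold} path triples for every complete test pair. */
    public static List<String[]> collectTests(File confRoot, File goldRoot) {
        List<String[]> tests = new ArrayList<String[]>();
        File[] categories = confRoot.listFiles();
        if (categories == null) return tests;   // missing or unreadable root
        for (File category : categories) {
            if (!category.isDirectory()) continue;
            File[] entries = category.listFiles();
            if (entries == null) continue;
            for (File xsl : entries) {
                String name = xsl.getName();
                if (!name.endsWith(".xsl")) continue;
                String base = name.substring(0, name.length() - 4);
                File xml = new File(category, base + ".xml");
                File gold = new File(new File(goldRoot, category.getName()),
                                     base + ".out");
                if (xml.exists()) {
                    tests.add(new String[] {
                        xsl.getPath(), xml.getPath(), gold.getPath() });
                }
            }
        }
        return tests;
    }
}
```

A runner would then iterate the returned triples, transform each xsl/xml pair, and diff the result against the gold path, with no per-directory index file required.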

Where the line is to be drawn I leave as an open question.  I am trying
to give our requirements for the tasks that need to be done.  In the above
scenario, I think similar functionality is needed by several projects, so
it makes sense to have it in the infrastructure, either directly or
indirectly via Ant.  I'm not trying to design it just at the moment, only
state what we need to do.

For Performance testing, I think this requires a strong sense of a baseline
report, against which another run can be compared.  I feel strongly that
performance testing needs to be part of the nightly measurements, the same
as unit testing, and should be part of the same framework.
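A baseline comparison could be as simple as the sketch below: a stored report maps test names to timings, and a nightly run regresses when it exceeds the baseline by more than some tolerance.  The report shape and the 10%-style threshold are assumptions, not a design:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Compares one run's timings against a stored baseline report.
 *  Report format and tolerance policy are assumptions for illustration. */
public class PerfBaseline {
    /** Returns names of tests whose time exceeds baseline * (1 + tolerance). */
    public static List<String> regressions(Map<String, Double> baseline,
                                           Map<String, Double> current,
                                           double tolerance) {
        List<String> slow = new ArrayList<String>();
        for (Map.Entry<String, Double> e : current.entrySet()) {
            Double base = baseline.get(e.getKey());
            // Tests with no baseline entry are skipped, not flagged.
            if (base != null && e.getValue() > base * (1.0 + tolerance)) {
                slow.add(e.getKey());
            }
        }
        return slow;
    }
}
```

Running this nightly in the same harness as the unit tests is what keeps the baseline meaningful: same machine, same load, comparable numbers.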

API testing is probably closer to a straightforward unit test, though
there are some interesting questions about input data files... i.e. I
should be able to run a suite of files through the API tests, though the
structure of these may differ significantly from conformance tests.
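The "suite of files through the API tests" idea could look like the following data-driven sketch, where one check runs once per input file.  The interface and names are hypothetical stand-ins for whatever API is actually under test:

```java
import java.io.File;

/** Data-driven API testing sketch: the same check runs once per input
 *  file in a suite.  "ApiCheck" is a hypothetical stand-in for the real
 *  API call being exercised. */
public class ApiSuiteRunner {
    public interface ApiCheck {
        boolean check(File input);
    }

    /** Runs the check against every input and returns the failure count. */
    public static int runSuite(File[] inputs, ApiCheck check) {
        int failures = 0;
        for (File input : inputs) {
            if (!check.check(input)) failures++;
        }
        return failures;
    }
}
```

The point is that the input files, not the test code, carry the variation, so the same suite structure can serve conformance-style and API-style tests even when their file layouts differ.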

For negative tests, logging facilities are needed for the error messages (I
think), and gold "error" message files are needed.  I think the end-point
comparison is different from that of a positive test.
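For the negative-test end-point, the comparison target is the error message the processor raised, not a transformed document.  A minimal sketch, assuming a gold "error" file holds the expected message text (the class name is hypothetical):

```java
/** Negative-test end-point check: compares a caught error message against
 *  a gold "error" message, tolerating leading/trailing whitespace.
 *  Name and normalization policy are assumptions for illustration. */
public class NegativeTestCheck {
    public static boolean errorMatchesGold(String actualMessage,
                                           String goldMessage) {
        if (actualMessage == null || goldMessage == null) return false;
        return actualMessage.trim().equals(goldMessage.trim());
    }
}
```

In practice the normalization would likely need to be looser (line numbers, file paths embedded in messages), which is exactly why logging the raw message matters.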

I think this infrastructure should have some specificity to XML processing.
Comparing XML is different from comparing text files.  And comparing HTML
is even harder.  SAXON and Xalan can output two files that will appear
completely different in a text differ, but will be identical in Xalan's DOM
comparing mechanism.  We don't yet have a mechanism to compare HTML output.
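A small illustration of the point: the two inputs below differ as character streams (attribute order), so a text differ flags them, but they are equal as DOM trees.  This uses the standard JAXP parser and DOM Level 3 Node.isEqualNode, which compares attributes without regard to order; it is a sketch of the idea, not Xalan's actual comparator:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

/** Structural XML comparison via the DOM, as opposed to a text diff.
 *  Sketch only; Xalan's real comparison mechanism is more involved. */
public class DomCompare {
    public static boolean domEqual(String xml1, String xml2) {
        try {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);
            DocumentBuilder b = f.newDocumentBuilder();
            Document d1 = b.parse(new ByteArrayInputStream(xml1.getBytes("UTF-8")));
            Document d2 = b.parse(new ByteArrayInputStream(xml2.getBytes("UTF-8")));
            // DOM Level 3 equality: attribute order does not matter.
            return d1.isEqualNode(d2);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

HTML is the harder case precisely because parsed HTML need not be well-formed XML, so no equivalent tree comparison falls out for free.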

I have a feeling that things take another step towards complexity once you
start adding Tomcat and Cocoon to the mix.  And I think there will be other
applications piled on top of these.  I want to make it clear that unit
testing is only one aspect of what I am proposing.  The real challenge
comes with pipelines.  Consider error handling in a pipeline, for instance.
And how do you measure latency in a pipeline?
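The latency question suggests timing each stage separately, so a report shows where pipeline time goes rather than one end-to-end number.  A minimal sketch, with hypothetical stage names and a deliberately simplified string-in/string-out stage interface:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

/** Per-stage latency measurement in a processing pipeline.  The stage
 *  interface and names are assumptions; real stages (parse, transform,
 *  serve) would pass richer objects than strings. */
public class TimedPipeline {
    private final Map<String, UnaryOperator<String>> stages = new LinkedHashMap<>();
    private final Map<String, Long> elapsedNanos = new LinkedHashMap<>();

    public TimedPipeline addStage(String name, UnaryOperator<String> stage) {
        stages.put(name, stage);
        return this;
    }

    /** Runs the stages in order, recording elapsed time for each. */
    public String run(String input) {
        String value = input;
        for (Map.Entry<String, UnaryOperator<String>> e : stages.entrySet()) {
            long start = System.nanoTime();
            value = e.getValue().apply(value);
            elapsedNanos.put(e.getKey(), System.nanoTime() - start);
        }
        return value;
    }

    public Map<String, Long> report() {
        return elapsedNanos;
    }
}
```

Error handling is the harder half of the pipeline problem: a failure mid-pipeline has to be attributed to a stage and compared against an expected outcome, which this sketch does not attempt.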

Yes, these all could be project-specific tests, and some of them may well
be.  But the complexity is such that I think the mechanisms that can be
shared should be attacked as a community.
