cocoon-dev mailing list archives

From "Hunsberger, Peter" <>
Subject RE: [ANN] XInclude processor for xml-commons
Date Tue, 06 May 2003 20:25:36 GMT
J.Pietschmann <> asked:

> Hunsberger, Peter wrote:
> > I think it's possible we're going to have to end up 
> re-evaluating SAX 
> > someday in any case....  The pull semantics don't work, you need 
> > something that supports a DOM like traversal model (or better an 
> > abstract Xquery type of traversal?).
> What's wrong with an XQuery generator which spews SAX events? 
> SAX is problematic if you want to attach PSVI-like thingies 
> like datatypes to XML nodes, or if you want to do a sort of 
> attaching a schema to the XML document represented by an event stream.

Probably nothing; that would be the way to give legacy Cocoon users
something they can keep on using.  However, I suspect that gets you into a
procedural mode of thinking: go get the results for this XQuery, then the
next results, and the next, until I've "pulled" everything I need out of
the XML.
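To make the contrast concrete, here's a minimal sketch of that procedural "pull" style (using XPath over a DOM rather than a full XQuery engine, and a made-up `orders` document, purely for illustration): the caller asks for one result set at a time, instead of reacting to a pushed event stream.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;
import java.io.StringReader;

public class PullStyle {
    public static void main(String[] args) throws Exception {
        String xml = "<orders><order id='1'/><order id='2'/></orders>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // Procedural "pull": go get this result set, then the next one...
        NodeList orders = (NodeList) xpath.evaluate(
                "/orders/order", doc, XPathConstants.NODESET);
        for (int i = 0; i < orders.getLength(); i++) {
            System.out.println(((Element) orders.item(i)).getAttribute("id"));
        }
    }
}
```

The whole document has to be materialized (here as a DOM) before the first query can run, which is exactly the cost the event-stream model avoids.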

Thinking about it, XQuery traversal doesn't work for the XSLT world, because
all XSLT has is XPath. Again, this hinges on whether your transformation
language is XSLT or something else.  If it's XSLT, then a SAX adapter gains
you nothing (at the point where you feed the XSLT transformation).
Of course, it doesn't really hinder you either, except maybe in terms of
efficiency.
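For readers unfamiliar with the plumbing, here's a minimal sketch of feeding a SAX event stream straight into an XSLT transformation via JAXP's `SAXSource` (the inline document and stylesheet are invented for the example): the transformer consumes the events directly, so the adapter neither helps nor hurts the XSLT side.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import org.xml.sax.InputSource;
import java.io.StringReader;
import java.io.StringWriter;

public class SaxToXslt {
    public static void main(String[] args) throws Exception {
        String xml = "<root><item>hi</item></root>";
        String xsl =
            "<xsl:stylesheet version='1.0' "
            + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
            + "<xsl:output method='text'/>"
            + "<xsl:template match='/'>"
            + "<xsl:value-of select='root/item'/>"
            + "</xsl:template>"
            + "</xsl:stylesheet>";
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xsl)));
        StringWriter out = new StringWriter();
        // SAXSource: the XSLT engine is driven by the SAX event stream
        t.transform(new SAXSource(new InputSource(new StringReader(xml))),
                    new StreamResult(out));
        System.out.println(out);
    }
}
```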

However, there's a more subtle issue: if you really want an efficient
lazy-evaluation database model feeding anything, then you're going to need a
schema.  I'm pretty much convinced that you can't have generalized automatic
space/time optimization unless you're willing to give the "optimizer" hints
about how to do its job, and in the XML world those hints will have to come
in some form of schema.  The more annotated it is, the better the
optimization you can get.  The more information your data-extraction model
can pass into the lazy-evaluator logic, the more it knows about which parts
of the data stream it needs to evaluate. So sure, SAX can always work; but
will it automagically give you efficient traversal of your data without any
other work?  I don't think so...
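As a toy illustration of the kind of skipping a schema makes safe (using a StAX pull parser and an invented document, with the "schema knowledge" hard-coded as a comment rather than a real schema processor): if the evaluator knows a `<meta>` subtree can never contain an `<order>`, it can fast-forward past it without evaluating anything inside.

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;

public class SelectiveScan {
    public static void main(String[] args) throws Exception {
        String xml = "<db><meta><junk/></meta>"
                   + "<orders><order>42</order></orders></db>";
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT) {
                if (r.getLocalName().equals("meta")) {
                    // Stand-in for schema knowledge: <meta> can never
                    // contain an <order>, so skip the whole subtree.
                    skipSubtree(r);
                } else if (r.getLocalName().equals("order")) {
                    System.out.println(r.getElementText());
                }
            }
        }
    }

    // Fast-forward past the current element and everything inside it
    static void skipSubtree(XMLStreamReader r) throws XMLStreamException {
        int depth = 1;
        while (depth > 0) {
            int e = r.next();
            if (e == XMLStreamConstants.START_ELEMENT) depth++;
            else if (e == XMLStreamConstants.END_ELEMENT) depth--;
        }
    }
}
```

With plain SAX you'd still receive every event for the skipped subtree; the point of the schema hint is that a smarter evaluator wouldn't even have to look.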

I don't have a complete vision of how this will all work, but eventually I
think people will realize that if you have a schema hanging around, you can
automatically generate templates, automatically generate validation, and
automatically generate optimized data traversal.  I expect the hooks into
all these capabilities are going to require much more complete APIs than SAX
provides.
