cocoon-dev mailing list archives

From "Roy G. Biv" <>
Subject [RT] MVC Contexts (Was Re: [Design] JXTG 2.0 (generator and transformer, same template syntax?))
Date Thu, 09 Dec 2004 01:53:50 GMT
This one is long, folks.  Sorry, can't be helped.

Sylvain Wallez wrote:

> Stefano Mazzocchi wrote:
>> Sure, but the question is: once the syntax starts to get ugly for 
>> both because of changes we made to the language that make sense only 
>> on transformation and not on generation, would it still be the case?
>> remember that the generation stage has a scripted population stage 
>> first, the transformation stage does not!
> I don't see this distinction: if you look at the global processing 
> chain, there is always a population stage in front of the template 
> engine:
> 1/ template as a generator
>  +------------+              +-----------+
>  + Flowscript |--(objects)-->| Template  |----> ...
>  +   data     |              | generator |
>  +------------+              +-----------+
>                                    ^
>                                    |
>                              Template file
> 2/ template as a transformer
>  +-----------+               +-------------+
>  | Generator |--(xml data)-->|  Template   |----> ...
>  |           |               | transformer |
>  +-----------+               +-------------+
>                                     ^
>                                     |
>                               Template file
> Where is the difference? The template lays out some data. In the case 
> of the generator, it's controller-provided data, in the case of the 
> transformer, it's pipeline-produced data. Note that is not limited to 
> a generator, but can also be aggregation, prior transformation, etc, 
> or even some preliminary processing of some controller-provided data.
> <snip/>

Is this indeed the case?  The flow data is, strictly speaking, the 
model here.  But the template *may or may not* follow the flow data's 
structure closely.  In fact, the flow data is merely a hint or an 
indicator; the template file has the final say on the data structure.  
You can send the exact same flow data via sendPage* to two different 
JXTG template pipelines and get functionally different XML structures 
(depending on which objects are accessed and how).
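To make that concrete, here is a toy sketch (plain JavaScript, not 
Cocoon APIs; all names are made up for illustration) of two templates 
receiving the exact same flow data and producing functionally different 
structures:

```javascript
// Same "flow data" object handed to two hypothetical template
// functions; each produces a functionally different XML structure.
// The template, not the flow data, has the final say.
const flowData = { user: { name: "Ada", roles: ["admin", "editor"] } };

// Template 1 lays out the user's name only.
function greetingTemplate(model) {
  return `<greeting>Hello, ${model.user.name}</greeting>`;
}

// Template 2 ignores the name and iterates the roles instead.
function rolesTemplate(model) {
  const items = model.user.roles.map(r => `<role>${r}</role>`).join("");
  return `<roles>${items}</roles>`;
}

console.log(greetingTemplate(flowData)); // <greeting>Hello, Ada</greeting>
console.log(rolesTemplate(flowData));    // <roles><role>admin</role><role>editor</role></roles>
```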

MVC contexts.  That's my new buzzword for the day.

When you pull an RSS feed from another site for inclusion in your own 
site, what is that RSS feed to you?  A model right?  But to the remote 
site, the RSS is a view (to a remote database for example), not a 
model.  It's the final form for others to use.  But it's still RSS data, 
isn't it?  How can it be both a model and a view?  Because it's used in 
a different context.  The web browser is the same thing.  To Cocoon, 
HTML is absolutely the view.  But to a web browser, it's the model.  The 
HTML renderer+CSS is the view, and the client-side Javascript is clearly 
the controller.  Isn't this all true?  MVC is actually dependent upon 
the scope of the problem at hand.

In a high-level view of Cocoon, the model is the object model (Hibernate 
et al.), the view is the pipeline, and the controller is Flow.  But what 
is a generator in a pipeline?  The model?  But the model in Cocoon 
came from Flow.  And the pipeline is under no obligation to use that 
data from Flow: it can use a subset of that data, iterate a data 
structure multiple times, or ignore the data altogether.  In addition, it 
has access to data that Flow didn't give it (Actions).

So while it is the view for Cocoon at large, it is in fact its own MVC 
context that grabs data as needed just as a client-side Javascript can 
grab updated information from the server without refreshing the main 
page (GMail).  Who decides if that data is to be used?  The pipeline 
logic as controller.  (Ack!)  The generator determines which template to 
use.  In the case of JXTG, it loads the template file *and* injects 
data.  The difference between displaying a template file raw or with 
data is the choice of pipeline components in the sitemap, not Flow.  
Flow can only produce its "view" of the working data set and request a 
particular pipeline that presumably would put that data structure to 
good use.  The pipeline is the controller here.

A browser makes separate requests for the HTML, Javascript files and CSS 
stylesheets to produce the final view for your pleasure.  You don't see 
the MVC in a browser, you see the result of the MVC, the view.  Wasn't 
this the entire point of the last ten years of the W3C?  To specify HTML 
as a model, Javascript as a controller and CSS as a view?  In a 
different MVC context of course.  ;-)

So in a pipeline MVC context you have a model, the initial XML (static or 
a template) that determines the initial structure of the data events 
(generating SAX events where there previously were none), the controller 
with modular logic (pipeline-specified transformers and actions), and 
the serializer as view (what the outside world sees).
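That pipeline MVC context can be sketched as a toy event pipeline 
(hypothetical function names, nothing Cocoon-specific): the generator 
supplies the model as data events, the transformer acts as controller, 
and the serializer is the view.

```javascript
// Generator supplies the model as data events (as if parsed from XML).
// All names are made up for illustration; this is not Cocoon's API.
function generate() {
  return [{ tag: "item", text: "one" }, { tag: "item", text: "two" }];
}

function transform(events) {
  // Controller: modular logic deciding what the view actually receives.
  return events.map(e => ({ ...e, text: e.text.toUpperCase() }));
}

function serialize(events) {
  // View: what the outside world sees.
  return events.map(e => `<${e.tag}>${e.text}</${e.tag}>`).join("");
}

console.log(serialize(transform(generate())));
// <item>ONE</item><item>TWO</item>
```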

Cocoon had MVC before the advent of Flow.  It simply had a single 
complete MVC context.  Flow gave rise to a new concept I've heard on 
this list repeatedly, MVC+.  But are we really defining M, V, C and some 
new, unnamed letter/symbol?  I think rather a new MVC context has been 
born -- or rather that the high level view of Cocoon has finally 
achieved true MVC -- and it's highly unusual because it occurs within 
the same application server context rather than in separate 
processes/machines, as was the case with external RSS feeds and web browsers.

When you specify the following pipeline matcher

  <map:match pattern="*.html">
    <map:generate type="file" src="{1}.xml"/>
    <map:transform type="xslt" src="xml2html"/>
    <map:serialize type="xhtml"/>
  </map:match>

What's the controller?  I believe it's the pipeline assembly.  The 
actions and the transformers.  The view?  The serializer.  The model?  
Why, the XML document of course.  But can't XSLT stylesheets put in all 
sorts of new data either through includes or just by putting the data 
hard-coded in the primary stylesheet?  The above pipeline doesn't cover 
the case where a stylesheet simply produces other non-presentational XML 
as part of processing -- injecting new data into the model.  Doesn't 
I18nTransformer commonly get locale information from an input module and 
grab data from external catalogs to augment what's already there?  And 
actions?  Generating new data and injecting it into the pipeline at 
arbitrary points.
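For illustration, here's a hedged sketch (not the real I18nTransformer, 
and a made-up catalog) of a transformer injecting new data into the 
model mid-pipeline by resolving message keys against an external catalog:

```javascript
// Hypothetical locale catalog -- stands in for an external i18n catalog.
const catalog = {
  "greeting": "Bonjour",
  "farewell": "Au revoir"
};

// Replace <i18n:text key="..."/> placeholders with catalog entries;
// unknown keys fall back to the key itself.
function i18nTransform(xml) {
  return xml.replace(/<i18n:text key="([^"]+)"\/>/g,
    (_, key) => catalog[key] ?? key);
}

console.log(i18nTransform('<p><i18n:text key="greeting"/>, Ada</p>'));
// <p>Bonjour, Ada</p>
```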

The generator defines the data model, right?  The template file is the 
initial data structure model though.  Sure, it's "waiting" for data 
injection, but isn't a template with i18n attributes doing the same?  
Once again, if the pipeline is changed in the sitemap to a static 
pipeline as listed above, doesn't the call to sendPage* still work?

Hell, if anything, the template generator is its own MVC context.  The 
Flow objects as model, the template as controller and the output as view.
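Something like the following hypothetical JXTG-style template (assuming 
the usual jx 1.0 namespace, and a Flow model exposing `title` and 
`items`) shows the template acting as controller: it decides which parts 
of the Flow model get used at all.

```xml
<page xmlns:jx="http://apache.org/cocoon/templates/jx/1.0">
  <!-- The template, not Flow, decides which parts of the model to use -->
  <title>${title}</title>
  <jx:forEach var="item" items="${items}">
    <entry>${item.name}</entry>
  </jx:forEach>
</page>
```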

But the sitemap doesn't reflect this.

  <map:generate type="jx" src="mytemplate.jx"/>

In every other context, "src" refers to the model.  Now that it's being 
called by Flow, it's the controller.  Well...  If you use JXTG, it's the 
controller.  Otherwise, it's still specifying the model -- the input source.

So what were the advantages of template transformers again?

  <map:match pattern="*.html">
    <map:generate type="file" src="{1}.xml"/>
    <map:transform type="macro-expansion"/>
    <map:transform type="template"/>
    <map:transform type="lenses"/>
    <map:transform type="xslt" src="doc2html"/>
    <map:serialize type="xhtml"/>
  </map:match>

And what is lost?  Well, complexity for one.  The template transformer 
has to worry about data -- values and iteration -- and nothing else.  
The macro expansion, taglib, Eugene engine can be a Tag java interface 
with a lookup catalog, a set of stylesheets, or whatever is technically 
superior, without having to muck with the template component code.  This 
is not just about making them more modular and easily "pluggable"; it's 
about pulling them out altogether and letting the site administrator 
sort it out through the sitemap.  Letting the site administrator 
determine how data goes through the pipeline and gets processed was the 
management intention behind the sitemap, wasn't it?
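The tag-catalog idea can be sketched like this (plain JavaScript, all 
names hypothetical): macro expansion as its own pluggable stage, a 
lookup from tag names to expansion functions, kept entirely separate 
from the value/iteration template step.

```javascript
// Hypothetical pluggable tag catalog: tag name -> expansion function.
const tagCatalog = {
  // Expands a custom <datetime/> tag into plain markup.
  "datetime": () => "<span class=\"datetime\">2004-12-09</span>"
};

// Expand any empty custom tag found in the catalog; leave unknown
// tags untouched for later stages to handle.
function expandMacros(xml) {
  return xml.replace(/<(\w+)\/>/g, (match, tag) =>
    tagCatalog[tag] ? tagCatalog[tag]() : match);
}

console.log(expandMacros("<p>Posted: <datetime/></p>"));
// <p>Posted: <span class="datetime">2004-12-09</span></p>
```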

Sticking with the "template processor must be a generator" mindset, I 
think, misses the fact that it has already lost the strictly clean MVC 
war.  While I think what I've suggested would be better, it's actually 
immaterial.  I'm simply saying that we shouldn't necessarily be locking 
ourselves into past decisions, because I'm not entirely convinced they 
were the right ones or based on logical necessity.  This is true because 
of the use of the "src" attribute, the fact that MVC is in fact 
compartmentalized, and Flow's data injection role in the pipeline.

- Miles "More Fuel For the Fire" Elam
