cocoon-dev mailing list archives

From: Andy Lewis <Andy.Le...@NSMG.VERITAS.com>
Subject: RE: Thoughts on a data-driven web site
Date: Wed, 21 Jun 2000 19:18:46 GMT
The concept of XSP-independent taglibs is one I like. Taglibs are great, but
the problem with XSP is that it is a generator, not a filter. Not everything
I want to do will originate from XSP - I will have other generators. That
currently seems to prevent me from using any of the functionality being
built with taglibs. With a growing number of Cocoon features being
implemented as XSP taglibs, I've been wondering how to approach this. Any
suggestions, anyone?
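
For the sake of discussion, here is one way a taglib could live outside XSP:
as a plain SAX filter that any generator can feed. This is only a sketch of
the idea, not Cocoon's actual taglib API - the weather namespace, the element
name, and the lookup call are all invented for the example.

    import org.xml.sax.Attributes;
    import org.xml.sax.SAXException;
    import org.xml.sax.helpers.AttributesImpl;
    import org.xml.sax.helpers.XMLFilterImpl;

    // Hypothetical taglib-as-SAX-filter: replaces <weather:current city="..."/>
    // with generated content, regardless of which generator produced the events.
    public class WeatherTagFilter extends XMLFilterImpl {
        private static final String NS = "http://example.org/taglib/weather"; // invented namespace

        public void startElement(String uri, String local, String qName, Attributes atts)
                throws SAXException {
            if (NS.equals(uri) && "current".equals(local)) {
                String report = lookupWeather(atts.getValue("city"));
                super.startElement("", "p", "p", new AttributesImpl());
                char[] text = report.toCharArray();
                super.characters(text, 0, text.length);
                super.endElement("", "p", "p");
            } else {
                super.startElement(uri, local, qName, atts); // pass everything else through
            }
        }

        public void endElement(String uri, String local, String qName) throws SAXException {
            if (!(NS.equals(uri) && "current".equals(local))) { // drop the tag we replaced
                super.endElement(uri, local, qName);
            }
        }

        // Stand-in for a call to whatever dedicated server houses the real content.
        private String lookupWeather(String city) {
            return "Sunny in " + (city == null ? "somewhere" : city);
        }
    }

A filter like this could sit after a file generator, an XSP page, or anything
else that emits SAX events - which is exactly the property XSP-bound taglibs
lack.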

Andy Lewis

"The heights of genius are only measurable by the depths of stupidity."


		-----Original Message-----
		From:	Jonathan Stimmel [mailto:jon-lists@stimmel.net]
		Sent:	Wednesday, June 21, 2000 2:31 PM
		To:	cocoon-dev@xml.apache.org
		Subject:	Thoughts on a data-driven web site

		I've spent a lot of time over the past couple of weeks trying to
		explain why I find Cocoon so exciting, and why I think we need to
		use it (or at the very least, a similar architecture) in our web
		sites. I think my approach ties into the core of the Cocoon
		philosophy, and I hope that placing it into public view will help
		give both Cocoon2 and my project a little more direction. To see
		the ultimate goal of all this, take a look at
		http://www.realcities.com/.
		Most of our sites are template-driven; they take an HTML chunk and
		fill in data. I don't think I need to explain to this list the
		weaknesses in that approach (if only some of the people I've been
		talking to could understand that :{)

		In order to fully support integrating content from (potentially)
		dozens of different data sources, I believe we need to use the
		following model for dynamic delivery:

		 +-----------------------------+
		 |    user requests a page     |
		 +-----------------------------+
		                |
		                v
		 +-----------------------------+
		 | 1) identify content sources | <- (from index.xml, e.g.)
		 +-----------------------------+
		                |
		                v
		 +-----------------------------+
		 | 2) populate content         | <- (using a series of lightweight taglibs)
		 +-----------------------------+
		                |
		                v
		 +-----------------------------+
		 | 3) perform rough layout     | <- (using XSL)
		 +-----------------------------+
		                |
		                v
		 +-----------------------------+
		 | 4) perform final "render"   | <- (using XSL)
		 +-----------------------------+

		Key points:

		1) The original document being read does not contain any real
		content; it just specifies what types of content the page will
		contain. There would be one such document for each major section
		of the site (news, weather, stocks, ...) - a sketch of what such a
		descriptor might look like follows these points.

		2) This pipeline depends on the existence of fast, lightweight tag
		libraries. I've been wondering if Cocoon2 needs an actual tag
		library system (independent of XSP taglibs) to fulfill this need,
		but haven't reached a conclusion yet (I'll save my thoughts for
		another email).

		3) Think of this step as a home-grown fop...

		4) Combined with step 3, we can create skins, allowing different
		local sites to have different looks, without requiring significant
		tinkering with the rest of the page generation process.
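
		To make point 1 concrete: the per-section document might declare
		nothing but its content sources, and a tiny SAX handler could
		collect them before the pipeline decides which taglibs to run.
		Every name here (the elements, the "type" attribute, index.xml) is
		invented for illustration; it is not a format Cocoon defines.

		  import java.util.ArrayList;
		  import java.util.List;
		  import javax.xml.parsers.SAXParserFactory;
		  import org.xml.sax.Attributes;
		  import org.xml.sax.helpers.DefaultHandler;

		  // A hypothetical "index.xml" for the news section might look like:
		  //
		  //   <page section="news">
		  //     <source type="headlines"/>
		  //     <source type="weather"/>
		  //     <source type="stocks"/>
		  //   </page>
		  //
		  // The handler only records which content types the page asks for; the
		  // real content is pulled in later by the taglib filters (step 2).
		  public class ContentSourceReader extends DefaultHandler {
		      private final List sources = new ArrayList();

		      public void startElement(String uri, String local, String qName, Attributes atts) {
		          if ("source".equals(qName)) {
		              sources.add(atts.getValue("type"));
		          }
		      }

		      public List getSources() {
		          return sources;
		      }

		      public static void main(String[] args) throws Exception {
		          ContentSourceReader reader = new ContentSourceReader();
		          SAXParserFactory.newInstance().newSAXParser().parse("index.xml", reader);
		          System.out.println("This page wants: " + reader.getSources());
		      }
		  }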


		Other thoughts:

		This whole process is designed to be SAX-friendly; I would gladly
		trade power for speed in the tag libraries (which shouldn't be a
		problem, since their only function is to provide an interface to
		dedicated servers which house the actual content).

		Initially, the only features I would need from the sitemap would be
		minimal URL rewriting/mapping ("/category/autos" to
		"/category.xml?autos") and multiple output channels (HTML, WAP,
		...). Actually, I could do the URL rewriting in Apache (which will
		need to happen anyway), which means I can (potentially) serve the
		whole site using a single pipeline:

		  generate from file "*.xml"
		  filter through taglibs
		  match output destination
		    1) HTML -> filter through "*-html.xsl" and "skin1.xsl"
		    2) WAP  -> filter through "*-wap.xsl"
		    3) ...
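
		If the "match output destination" step boils down to picking a
		different stylesheet chain per channel, it might look something
		like this in plain JAXP - again just a sketch, with invented
		stylesheet names, not the sitemap's real matching syntax.

		  import javax.xml.transform.TransformerFactory;
		  import javax.xml.transform.dom.DOMResult;
		  import javax.xml.transform.dom.DOMSource;
		  import javax.xml.transform.stream.StreamResult;
		  import javax.xml.transform.stream.StreamSource;

		  // Sketch of matching on the output channel: the taglib-filtered document is
		  // rendered through a different stylesheet chain depending on the channel.
		  public class ChannelMatcher {
		      public static void render(String channel, StreamSource filteredDoc, StreamResult out)
		              throws Exception {
		          TransformerFactory tf = TransformerFactory.newInstance();
		          if ("wap".equals(channel)) {
		              // WAP gets a single pass through its own stylesheet.
		              tf.newTransformer(new StreamSource("article-wap.xsl")).transform(filteredDoc, out);
		          } else {
		              // HTML gets the rough layout pass plus a site-specific skin.
		              DOMResult rough = new DOMResult();
		              tf.newTransformer(new StreamSource("article-html.xsl")).transform(filteredDoc, rough);
		              tf.newTransformer(new StreamSource("skin1.xsl")).transform(new DOMSource(rough.getNode()), out);
		          }
		      }
		  }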

		Eventually we'll need some sort of authentication/authorisation,
		but I haven't thought that out too much yet...


		Comments from the peanut gallery? ;^)
