forrest-dev mailing list archives

From "Robert Koberg" <...@koberg.com>
Subject RE: latest updates
Date Sat, 12 Oct 2002 13:04:08 GMT
Morning,

> -----Original Message-----
> From: Jeff Turner [mailto:jefft@apache.org]
> Sent: Friday, October 11, 2002 11:04 PM

<snip/>

> Here is the crawler algorithm in pseudocode:
>
> Add ${project.start-uri} to {links}
> For page $P in {links}:
>   Take xml file P.xml
>   Combine P.xml with book.xml and tabs.xml (to create Px.xml)
>   Render this combined file Px.xml to P.html
>   Extract links from Px.xml and add them to {links}
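
(For illustration only, here is a rough Java sketch of that crawl loop. The
renderAndExtractLinks method is just a placeholder, not the actual Cocoon or
Forrest API; the real crawler renders through the Cocoon pipeline.)

  import java.util.ArrayDeque;
  import java.util.Collections;
  import java.util.Deque;
  import java.util.LinkedHashSet;
  import java.util.List;
  import java.util.Set;

  public class CrawlSketch {

    // Placeholder: combine P.xml with book.xml/tabs.xml, render it to P.html,
    // and return the links found in the combined document.
    static List<String> renderAndExtractLinks(String page) {
      return Collections.emptyList();
    }

    public static void main(String[] args) {
      Set<String> seen = new LinkedHashSet<>();
      Deque<String> todo = new ArrayDeque<>();
      String start = "index";               // ${project.start-uri}
      seen.add(start);
      todo.add(start);

      while (!todo.isEmpty()) {
        String page = todo.remove();
        for (String link : renderAndExtractLinks(page)) {
          if (seen.add(link)) {             // queue each link only once
            todo.add(link);
          }
        }
      }
    }
  }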

<snip/>

> > - is it possible to generate only one page?  Even on my machine (1.8GHz,
> > 512MB) it takes 1 minute to generate the whole site.  It would be useful
> > if I could just generate the document I'm currently working on.
>
> I think it may be technically possible, but it wouldn't be as easy as
> tools like Anakia make it.

If you relied more on XSL you could get this pretty easily:

- Create a links.xml that is the result of the crawl.
-- Perhaps this could also gather other information, such as different skins
within the same site, which features to turn on, labels, etc.
- Use links.xml as the main source in a single-page transformation (see the
sketch below).
-- Pass in a page-id (the P.xml path/filename?) to indicate which page to
transform.
-- Use the document() function to pull in the page's P.xml, book.xml and
tabs.xml.
--- Perhaps the crawler creates a hierarchical representation where folder
levels simply carry a name attribute. Then you would not need book.xml or
tabs.xml at all, because that information can be pulled from a hierarchical
links.xml.
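
To make the single-page idea concrete, here is a minimal JAXP sketch of
driving such a transformation. The stylesheet name (site2html.xsl) and the
parameter name (page-id) are only placeholders; the stylesheet itself would
use document() on the selected page's source, as described above.

  import java.io.File;
  import javax.xml.transform.Transformer;
  import javax.xml.transform.TransformerFactory;
  import javax.xml.transform.stream.StreamResult;
  import javax.xml.transform.stream.StreamSource;

  public class SinglePage {
    public static void main(String[] args) throws Exception {
      // links.xml is the crawl result; the stylesheet decides, from the
      // page-id parameter, which page's source to pull in via document().
      Transformer t = TransformerFactory.newInstance()
          .newTransformer(new StreamSource(new File("site2html.xsl")));
      t.setParameter("page-id", args[0]);   // e.g. "index.xml"
      t.transform(new StreamSource(new File("links.xml")),
                  new StreamResult(new File("out.html")));
    }
  }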

You can also use the same links.xml with something like a SAX DefaultHandler to
run through and transform every page rather quickly (a rough sketch follows
below). There is a simple example of this in the download zip linked from this
FAQ:

http://www.dpawson.co.uk/xsl/sect4/N9723.html#d4e306
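
Here is a rough sketch of that DefaultHandler approach. It assumes links.xml
records each crawled page as a <page src="..."/> element and reuses a
placeholder stylesheet name (page2html.xsl); both are guesses for
illustration, not a fixed format.

  import java.io.File;
  import javax.xml.parsers.SAXParser;
  import javax.xml.parsers.SAXParserFactory;
  import javax.xml.transform.Transformer;
  import javax.xml.transform.TransformerException;
  import javax.xml.transform.TransformerFactory;
  import javax.xml.transform.stream.StreamResult;
  import javax.xml.transform.stream.StreamSource;
  import org.xml.sax.Attributes;
  import org.xml.sax.SAXException;
  import org.xml.sax.helpers.DefaultHandler;

  public class SiteBuilder extends DefaultHandler {

    private final Transformer transformer;

    SiteBuilder(Transformer transformer) {
      this.transformer = transformer;
    }

    // Each <page src="foo/bar.xml"/> in links.xml triggers one transform.
    public void startElement(String uri, String localName, String qName,
                             Attributes atts) throws SAXException {
      if ("page".equals(qName)) {
        String src = atts.getValue("src");
        try {
          transformer.setParameter("page-id", src);
          transformer.transform(
              new StreamSource(new File(src)),
              new StreamResult(new File(src.replaceAll("\\.xml$", ".html"))));
        } catch (TransformerException e) {
          throw new SAXException(e);
        }
      }
    }

    public static void main(String[] args) throws Exception {
      Transformer t = TransformerFactory.newInstance()
          .newTransformer(new StreamSource(new File("page2html.xsl")));
      SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
      parser.parse(new File("links.xml"), new SiteBuilder(t));
    }
  }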

-Rob


>
> I personally consider the Cocoon crawler a terrible tool for developing a
> website. It's _far_ too slow. It takes 9 minutes to render the Forrest
> site. Imagine doing an edit, and waiting that long to see the result!
> Even rendering a single page, the crawler is many times slower than tools
> like Anakia.
>
> To develop a real site, I'd suggest the following strategy:
>
>  - Run 'forrest webapp', and configure a Tomcat server to run the site as
>    a webapp.
>  - Symlink src/documentation/content/xdocs to
>    build/webapp/content/xdocs
>  - Edit away, and see updates almost instantaneously in your browser
>  - Only use 'forrest site' (the crawler) when you need a static HTML
>    snapshot to upload
>
> This way, one experiences Cocoon's strengths rather than its
> weaknesses. I developed aft.sourceforge.net in this way. I think this is
> the direction Forrest should be heading in.
>
> > - (personal taste) I would prefer "<enable-search>true</enable-search>"
> > to "<disable-search>false</disable-search>" (double negation is
> > sometimes hard to quickly figure out)
>
> You're right. I think I'll change it.
>
> > - why do you copy the files to be processed into the build/tmp directory?
>
> Because we need to filter-copy some Ant @tokens@. However I'd really like
> to eliminate this copy step, because then when generating a webapp, one
> wouldn't need to symlink files.
>
> > Thank you for hearing out my complaints.  Forrest is really a great tool,
> > don't get me wrong!
>
> :) Well, usability is pretty terrible now, but the major problems can be
> worked around, and we're working on eliminating them. Your feedback is
> very valuable.
>
>
> --Jeff
>
> > Best regards
> >
> > -Vladimir
> >
> > --
> > Vladimir R. Bossicard
> > www.bossicard.com
> >
>


