On my page I create a PDF of the whole site.
In my sitemap I added:
The second match lets Cocoon do the PDF generation. Its source is matched by
the first match; there I have an aggregation of my site.
(It would be more convenient to evaluate site.xml or the various book.xml files.)
For "pressespiegel" I have to do an internal request, because it's a custom DTD.
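From this description, the two sitemap matches might look roughly like the following. This is only a sketch: the pattern names, the list of parts, and the stylesheet path are assumptions, not the original snippet from the mail.

```xml
<!-- Sketch only: "wholesite" patterns, part list, and paths are assumptions. -->
<map:match pattern="wholesite.xml">
  <map:aggregate element="site">
    <map:part src="cocoon:/index.xml"/>
    <!-- internal request, because pressespiegel uses a custom DTD -->
    <map:part src="cocoon:/pressespiegel.xml"/>
  </map:aggregate>
  <map:serialize type="xml"/>
</map:match>

<map:match pattern="wholesite.pdf">
  <map:generate src="cocoon:/wholesite.xml"/>
  <map:transform src="skins/common/xslt/fo/document2fo.xsl"/>
  <map:serialize type="fo2pdf"/>
</map:match>
```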
Because all the documents are valid document-v11 but the result is not (I think), I
wrote a stylesheet which I call after the aggregation. Because I want to pass a
parameter, I created:
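What was created here might look like the following sitemap fragment (the stylesheet file name, the parameter name, and its value are assumptions):

```xml
<!-- Sketch: called after the aggregation step; names are assumptions. -->
<map:transform src="aggregate2document.xsl">
  <map:parameter name="title" value="My Site"/>
</map:transform>
```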
The stylesheet creates a new document with the title provided as parameter.
For each aggregated
document it creates a first-level section with the title of the document.
Some areas of the aggregated documents are deleted, and their title is
converted (as I said).
The rest is copied.
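The logic described above could be sketched as an XSLT stylesheet like this. It is a sketch, not the original stylesheet from the mail; the element names assume document-v11 input aggregated under a `site` root element.

```xml
<?xml version="1.0"?>
<!-- Sketch of the described stylesheet, not the original.
     Assumes document-v11 documents aggregated under a <site> root. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Title passed in from the sitemap as a parameter -->
  <xsl:param name="title"/>

  <!-- Wrap everything in one new document with the given title -->
  <xsl:template match="/site">
    <document>
      <header><title><xsl:value-of select="$title"/></title></header>
      <body>
        <!-- Each aggregated document becomes a first-level section,
             titled with that document's own title -->
        <xsl:for-each select="document">
          <section>
            <title><xsl:value-of select="header/title"/></title>
            <xsl:apply-templates select="body/*"/>
          </section>
        </xsl:for-each>
      </body>
    </document>
  </xsl:template>

  <!-- The rest is copied unchanged (identity template) -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```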
Maybe not the best solution, but it works.
Hmmm, the PDF got corrupted during upload ...
In the next days you can try it from:
If someone wants it, here is the stylesheet:
Thanks for Forrest. :-)
From: Tom Klaasen [mailto:email@example.com]
Sent: Tuesday, 4 March 2003, 16:34
Subject: Forrest: responses
I've used Forrest to build some documentation for my client. I've presented
it to some people already, and all reactions were "ooohh..." and
"aaaahh....". Some even asked where they could find this tool, so they could
play with it themselves. And those are not Java people! Great.
When I present Forrest, I start with "it's a site", and "it's generated from
xml". "You put your xml through a command line tool, and then you get this
html". Occasionally, I change the skin, rebuild, and get some more "ooohhh".
But the final feature that makes them go "I want that" is the
auto-generation of the PDF files. Great, they think, two files with the
effort of typing one. And right they are :-)
The only thing that looks like a drawback (give a finger, and they'll want
an arm) is that all PDFs are per-page. It's not (yet?) possible to generate
one big PDF from the whole of the site. It's on Forrest's Dream List though,
so I keep my hopes up that it will get there some day. In the meantime,
I've quickly pondered what difficulties would come up when you implement
something like that, and it didn't look too easy. You'd have to build a PDF
file while crawling the site, instead of just aggregating a static
collection of PDF pipelines. At least, that's how it looks from the outside.
And that is probably the reason that it's not implemented yet. Ah, time ...
But overall: fantastic job, Forresteers. You really made it possible to
concentrate on content instead of fighting with the text editor's whims.