httpd-dev mailing list archives

From Ben Hyde <>
Subject Stacking up Response Handling
Date Wed, 23 Sep 1998 20:26:58 GMT
Alexei Kosut writes:
>The problem, as I see it, is this: Often, I suspect it will be the case
>that the module does not know what metadata it will be altering (and how)
>until after it has processed the request. i.e., a PHP script may not
>discover what dimensions it uses (as we discussed earlier) until after it
>has parsed the entire script. But if the module is functioning as an
>in-place filter, that can cause massive headaches if we need the metadata
>in a complete form *before* we sent the entity, as we do for HTTP.
>I'm not quite sure how to solve that problem. Anyone have any brilliant

This is the same problem as building a layout engine that does
incremental layout, but simpler, since I doubt we'd want to allow for
reflow.

Sometimes you can send output right along, sometimes you have to wait.
I visualize the output as a tree/outline; as it is swept out, a stack
holds the path to the current leaf.  Handlers for the individual nodes
wait or proceed depending on whether they can.

It's a pretty design, with the pipeline consisting of this stack of
output transformers/generators.  Each pipeline stage accepts a stream
of output_chunks.  I think of these output_chunks as coming in plenty
of flavors, for example transmit_file, transmit_memory, etc.  Some
pipeline stages might handle very symbolic chunks.  For example, a
transmit_xml_tree chunk might be handed to a transform_xml_to_html
stage in the pipeline.
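To make the chunk idea concrete, here's a minimal C sketch of what a
tagged output_chunk and a pipeline stage might look like.  All of the
names and fields here are illustrative assumptions, not an actual API:

```c
#include <stddef.h>

/* Hypothetical chunk flavors; the names mirror the examples above. */
typedef enum {
    CHUNK_TRANSMIT_FILE,     /* send a file's contents */
    CHUNK_TRANSMIT_MEMORY,   /* send an in-memory buffer */
    CHUNK_TRANSMIT_XML_TREE  /* symbolic chunk: a parsed XML tree */
} chunk_flavor;

typedef struct output_chunk {
    chunk_flavor flavor;
    union {
        int fd;                                     /* CHUNK_TRANSMIT_FILE */
        struct { const char *p; size_t len; } mem;  /* CHUNK_TRANSMIT_MEMORY */
        void *xml_tree;                             /* CHUNK_TRANSMIT_XML_TREE */
    } u;
} output_chunk;

/* A pipeline stage accepts a stream of chunks, transforming them or
   passing them to the stage below it on the stack. */
typedef struct pipeline_stage {
    const char *name;
    int (*accept_chunk)(struct pipeline_stage *self, output_chunk *c);
    struct pipeline_stage *next;  /* the stage below us */
} pipeline_stage;
```

A transform_xml_to_html stage would accept CHUNK_TRANSMIT_XML_TREE
chunks and hand CHUNK_TRANSMIT_MEMORY chunks down to its next stage.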

I'm assuming the core server would have only a few kinds of pipeline
nodes: generate_response, generate_content_from_url_via_file_system,
and generate_via_classic_module_api.  Things like convert_char_set or
do_cool_transfer_encoding could easily be loaded at runtime and
authored outside the core.  That would be nice.

For typical fast responses we wouldn't push much on this stack at
all.  It might go something like this: push a generate_response node;
it selects an appropriate content generator by consulting the
module community and pushes that.  Often this is
generate_content_from_url_via_file_system, which in turn does
all that ugly mapping to a file name, then passes
transmit_file down the pipeline and pops itself off the stack.
generate_response, once back on top again, does the transmit and
pops off.
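The fast-path sequence above can be replayed as a toy stack trace.
This is only a sketch of the push/pop discipline, using the stage
names from the walkthrough as strings:

```c
#define MAX_DEPTH 16

/* A toy stage stack; stage names follow the walkthrough above. */
static const char *stage_stack[MAX_DEPTH];
static int depth = 0;

static void push(const char *stage) { stage_stack[depth++] = stage; }
static void pop(void)               { --depth; }

/* Replays the fast path; returns the final stack depth, which should
   be 0 once every stage has popped itself off as described. */
int run_fast_path(void)
{
    /* The core pushes the response generator. */
    push("generate_response");

    /* It consults the module community and pushes a content generator. */
    push("generate_content_from_url_via_file_system");

    /* That stage maps the URL to a file name, hands a transmit_file
       chunk down the pipeline, and pops itself off the stack. */
    pop();

    /* generate_response, back on top, does the transmit and pops off. */
    pop();

    return depth;
}
```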

For rich complex output generation we might push all kinds of things
(charset converters, transfer encoders, XML -> HTML rewriters, cache
builders, old-style Apache module API simulators, whatever).

The intra-stack element protocol gets interesting around issues
like error handling, blocking, etc.
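One way that protocol might look: each stage returns a result code,
and an error anywhere unwinds the stages above the response generator
so it can emit an error document instead.  The codes and the unwind
helper here are hypothetical, just to make the issue concrete:

```c
/* Hypothetical result codes for the intra-stack protocol. */
typedef enum {
    STAGE_OK,          /* chunk consumed; keep going */
    STAGE_WOULDBLOCK,  /* stage must wait, e.g. metadata not complete yet */
    STAGE_ERROR        /* abort: unwind the stack above generate_response */
} stage_result;

/* Sketch: pop every stage above the response generator, giving each a
   chance to clean up; returns the depth we unwound to. */
int unwind_on_error(int generate_response_depth, int current_depth)
{
    while (current_depth > generate_response_depth)
        --current_depth;  /* pop one stage */
    return current_depth;
}
```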

I particularly like how this allows simulation of the old module API,
as well as the APIs of other servers, and experimenting with module
APIs that cross process or machine boundaries.

In many ways this isn't that much different from what was proposed
a year ago.  

 - ben
