httpd-dev mailing list archives

From Tony Finch <>
Subject Re: Thoughts on filter-chain composition
Date Tue, 12 Sep 2000 06:23:02 GMT

wrote:
>Tony Finch writes...
>>  I think your suggested way of going about this is completely wrong.
>Roger that. It's a free country. 
>All I can do is walk through your response and try to reiterate some
>things, so please allow me to do that.
>The successful filtering of the content does not simply depend
>on MIME type in and MIME type out. It all depends on what is
>actually INSIDE the object itself.

Hmm, yes, I did miss a big point there. Try writing more concisely so
that it's easier to spot the part of your message that has something
new to say :-) I shall now invoke "TOKILEY response Plan B":

I think your suggested way of going about this is completely wrong. 
It would probably be easier to do on-the-fly image conversion without
trying to use a one-to-one map from NetPBM filters to Apache filters;
instead there would be one Apache filter for generic image filtering
which initially looks at the image header to decide how to set up the
NetPBM filters, and after that forms an encapsulation around all the
NetPBM processing that makes it look like a single-stage process to
Apache. I.e. one filter that does one big thing well, rather than
lots of small filters that each do one small thing well.

Note that this discussion is at a different level from the one about
content-encoding: all the metadata for that decision is available
before the request handler starts (so my earlier assertions remain
true in that case), whereas for your image filter part of the metadata
is the header of the image file and that file isn't opened until the
request handler starts. This is a feature that it shares with the
output of CGIs -- they also produce metadata "late" which may require
altering the filter stack (canonical example: to add SSI processing).
We've also talked in the past about filtering headers, which magnifies
the problem: as the filters change the headers the filter stack must
change accordingly.

Now that I've spilled worms all over the place I'd like to put the lid
back on the can, but I don't know how to do it properly. I'm quite
strongly inclined to be conservative for 2.0 and avoid the header
filtering problem as much as possible: the makeup of the filter
stack is decided purely on the basis of HTTP metadata (i.e. we define
the problem so that we can ignore the headers of image files etc.),
and as much as possible before the content handler runs. To support
CGI -> SSI etc., the content handler can tweak the filter stack late
(after it starts but before any calls to ap_pass_brigade), but the
filters themselves may not change the
headers. I think that is enough functionality to fulfill the main
goals of 2.0 and to give us the practical experience of filters that
we need in order to design a complete system for filtering metadata
that isn't a dog's dinner.

The main problem with this plan is that there must be some special
allowances made for implementation -> network charset filters. This
code's sole purpose in life is to filter metadata (headers and chunk
tags) so it would seem to be excluded. However it doesn't change the
meaning of the data so it can be wedged in with the aforementioned
special allowances. We can worry about finding a properly orthogonal
approach later.

en oeccget g mtcaa    f.a.n.finch
v spdlkishrhtewe y
eatp o v eiti i d.
