httpd-dev mailing list archives

From "Roy T. Fielding" <field...@kiwi.ICS.UCI.EDU>
Subject Re: what are the issues? (was: Re: Patch review: ...)
Date Fri, 30 Jun 2000 06:30:51 GMT
>Point is: the char* callback does exactly what an ioblock/bucket callback
>would do on its own when it must examine each byte.
>
>So, I will state again: the char* callback is not a problem. If you
>disagree, then please explain further.

There is a significant flaw in that argument.  char * doesn't do what we
want when a filter does not have to examine each byte.  That is the problem.

It doesn't make any sense to have two filter interfaces when you can
accomplish the same thing with one interface and a simple parameter
conversion function.
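
To make that concrete, here is a minimal sketch of the idea, using
hypothetical type and function names rather than the real httpd
interfaces: the bucket-list callback is the only filter interface, and an
existing char*-style filter is plugged in through a small conversion shim
that walks the list and hands each data bucket to the old callback.

    #include <stddef.h>

    typedef struct bucket {
        const char    *data;   /* NULL for metadata buckets such as EOS */
        size_t         len;
        struct bucket *next;
    } bucket;

    /* the one real filter interface: a callback that takes a bucket list */
    typedef int (*bucket_filter_fn)(void *ctx, bucket *list);

    /* the legacy char*-style callback */
    typedef int (*char_filter_fn)(void *ctx, const char *buf, size_t len);

    struct char_shim { char_filter_fn fn; void *ctx; };

    /* conversion function: adapts a char* callback to the bucket interface */
    static int char_shim_filter(void *vshim, bucket *list)
    {
        struct char_shim *shim = vshim;
        bucket *b;
        for (b = list; b != NULL; b = b->next) {
            int rv;
            if (b->data == NULL)
                continue;                      /* skip metadata buckets */
            rv = shim->fn(shim->ctx, b->data, b->len);
            if (rv != 0)
                return rv;                     /* propagate the error   */
        }
        return 0;
    }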

>1) there is nothing in my framework for lists of buckets
>2) presume there is a new put_buckets() API for sending a list of buckets
>3) put_buckets() would iterate over the buckets, map them into a char*, and
>   call the callback for each one.

That would not be a solution. The purpose of passing a list of buckets around
is to linearize the call stack for the frequent case of filtered content
splitting one large bucket into separate buckets, with the filtered results
interspersed between them.  The effect is that a filter chain can frequently
process an entire message in one pass down the chain, which enables the
stream end to send the entire response in one go.  That in turn allows it
to do interesting things like provide a content length by summing the
lengths of all the buckets' data, and set a last-modified time by picking
the most recent time from a set of static file buckets.
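
A rough sketch of what the stream end can do once the whole brigade reaches
it in one pass -- again with toy types, not the real API: sum the data
lengths for a Content-Length and take the most recent modification time
from the file buckets for a Last-Modified.

    #include <stddef.h>
    #include <time.h>

    typedef enum { BUCKET_DATA, BUCKET_FILE, BUCKET_EOS } bucket_type;

    typedef struct bucket {
        bucket_type    type;
        size_t         len;     /* data length for DATA and FILE buckets */
        time_t         mtime;   /* modification time for FILE buckets    */
        struct bucket *next;
    } bucket;

    /* one walk over the list yields both headers before any data is sent */
    static void summarize(const bucket *list, size_t *clen, time_t *lastmod)
    {
        const bucket *b;
        *clen = 0;
        *lastmod = 0;
        for (b = list; b != NULL; b = b->next) {
            if (b->type == BUCKET_EOS)
                break;                           /* whole response is here  */
            *clen += b->len;
            if (b->type == BUCKET_FILE && b->mtime > *lastmod)
                *lastmod = b->mtime;             /* most recent static file */
        }
    }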

I think it would help if we stopped using artificial examples.  Let's
try something simple:

       socket <-- http <-- add_footer <-- add_header <-- send_file

send_file calls its filter with an ap_file_t bucket and End-of-Stream (EOS)
in the bucket list.  add_header sets a flag, prepends another ap_file_t
bucket to the list and sends the list to its filter.  add_footer looks
at the list, finds the EOS, inserts another ap_file_t bucket in
front of the EOS, and sends the list on to its filter.  http walks through
the list picking up the (cached) stat values, notes the EOS, and, seeing
that its own headers_sent flag is false, sets the cumulative metadata
and sends the header fields, followed by three calls to the kernel to
send out the three files using whatever mechanism is most efficient.
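
For illustration only, the add_header and add_footer steps above might look
roughly like this (a toy bucket type standing in for ap_file_t buckets, and
a simplified calling convention): each filter just splices a file bucket
into the list and passes the whole list along, touching no byte of content.

    #include <stdlib.h>

    typedef enum { BUCKET_FILE, BUCKET_EOS } bucket_type;

    typedef struct bucket {
        bucket_type    type;
        const char    *filename;   /* stand-in for an ap_file_t handle */
        struct bucket *next;
    } bucket;

    static bucket *file_bucket(const char *name)
    {
        bucket *b = malloc(sizeof(*b));   /* error handling omitted */
        b->type = BUCKET_FILE;
        b->filename = name;
        b->next = NULL;
        return b;
    }

    /* add_header: prepend a header-file bucket, return the new list head */
    static bucket *add_header(bucket *list)
    {
        bucket *hdr = file_bucket("header.html");
        hdr->next = list;
        return hdr;
    }

    /* add_footer: insert a footer-file bucket just in front of the EOS */
    static bucket *add_footer(bucket *list)
    {
        bucket *ftr = file_bucket("footer.html");
        bucket *b = list;

        if (list == NULL || list->type == BUCKET_EOS) {
            ftr->next = list;
            return ftr;
        }
        while (b->next != NULL && b->next->type != BUCKET_EOS)
            b = b->next;
        ftr->next = b->next;
        b->next = ftr;
        return list;
    }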

The point here isn't that this is the only way to implement filters.
The point is that no other interface can implement them as efficiently.
Not even close.  Yes, there are cases where string filters are just as
efficient as any other design, but there is no case in which they are
more efficient than bucket brigades.  The reason is that being able
to process a list of strings in one call more than offsets the extra
cost of list processing, regardless of the filter type, and it allows
for additional features that benefit HTTP processing: for example,
being able to determine the entire set of resources that make up the
source of a dynamic resource without teaching every filter about WebDAV.
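
As a sketch of that last point (still with toy types): because every static
source arrives at the stream end as a file bucket, a WebDAV-aware layer
could enumerate the resources behind a dynamic response with one walk over
the list, and no filter in the chain has to know anything about WebDAV.

    #include <stdio.h>

    typedef enum { BUCKET_DATA, BUCKET_FILE, BUCKET_EOS } bucket_type;

    typedef struct bucket {
        bucket_type    type;
        const char    *filename;   /* set for file buckets only */
        struct bucket *next;
    } bucket;

    /* report every static file that contributed to the response */
    static void list_sources(const bucket *list)
    {
        const bucket *b;
        for (b = list; b != NULL && b->type != BUCKET_EOS; b = b->next) {
            if (b->type == BUCKET_FILE)
                printf("source: %s\n", b->filename);
        }
    }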

In reference to some other messages, it isn't necessary for us to wait
for a content length -- HTTP/1.1 chunked encoding does work just fine.
That said, it is still preferable to send a Content-Length whenever
possible, since not all clients are HTTP/1.1 and most browsers can
present a better progress bar if they have the content-length in
advance of the data.
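
Just to illustrate the framing difference (this is generic HTTP/1.1 chunked
coding, not httpd code): with a Content-Length the total size precedes the
body, whereas chunked coding prefixes each piece with its own size in hex
and ends with a zero-length chunk, so no total is ever needed in advance.

    #include <stdio.h>

    /* emit one body piece in HTTP/1.1 chunked framing */
    static void write_chunk(FILE *out, const char *buf, size_t len)
    {
        fprintf(out, "%lx\r\n", (unsigned long)len);  /* chunk size in hex */
        fwrite(buf, 1, len, out);
        fputs("\r\n", out);
    }

    /* zero-length chunk marks the end of the body */
    static void end_chunks(FILE *out)
    {
        fputs("0\r\n\r\n", out);
    }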

....Roy
