httpd-dev mailing list archives

From "Roy T. Fielding" <>
Subject Re: cvs commit: apache-2.0/mpm/src/main iol_unix.c Makefile.tmpl buff.c http_connection.c http_protocol.c http_request.c
Date Sat, 19 Jun 1999 20:15:31 GMT
>Another odd thought -- suppose we had a mux protocol (a la http/ng).  The
>buffer_list mechanism would seem to be the best solution for that... 
>because you'll be aggregating multiple sources in some arbitrary order,
>tacking on mux headers, and passing them through to the next lower layer. 

Yep, that's why I was looking at it.  The best way to do HTTP parsing
(even in 1.1) is to allocate a slightly-larger-than-normal-request
buffer, slurp in as much as you can, and push it through a layer
that simply identifies the header field locations/lengths within
the buffer rather than copying them.  But to do that you need to be
prepared to read multiple pipelined requests within that allocation,
which means the equivalent of a per-connection pool.  That is bad
if there are a lot of requests on a connection, unless we also
free the allocations for reuse, which means keeping track of buffers.
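A minimal sketch of that "identify, don't copy" layer, assuming nothing
about the actual Apache internals (the names hdr_ref and parse_headers
are hypothetical): headers are located as (offset, length) pairs into
the read buffer instead of being copied out of it.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    size_t name_off, name_len;   /* field name within the buffer */
    size_t val_off,  val_len;    /* field value within the buffer */
} hdr_ref;

/* Scan buf[0..len) for "Name: value\r\n" lines until a blank line.
 * Returns the number of headers found, filling refs (capacity max).
 * Nothing is copied; refs only point into the caller's buffer. */
static int parse_headers(const char *buf, size_t len,
                         hdr_ref *refs, int max)
{
    size_t i = 0;
    int n = 0;
    while (i < len && n < max) {
        size_t line = i;
        while (i < len && buf[i] != '\n')     /* find end of line */
            i++;
        size_t eol = (i > line && buf[i-1] == '\r') ? i - 1 : i;
        if (eol == line)                      /* blank line ends headers */
            break;
        size_t colon = line;                  /* split at first ':' */
        while (colon < eol && buf[colon] != ':')
            colon++;
        if (colon < eol) {
            size_t v = colon + 1;
            while (v < eol && buf[v] == ' ')  /* skip optional space */
                v++;
            refs[n].name_off = line;
            refs[n].name_len = colon - line;
            refs[n].val_off  = v;
            refs[n].val_len  = eol - v;
            n++;
        }
        i++;                                  /* step past '\n' */
    }
    return n;
}
```

Because the refs stay valid only as long as the underlying buffer does,
this is exactly where the per-connection allocation lifetime question
above comes in.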

Oh, it also needs a header field abstraction that holds a field
value as a linked list of comma-separated linked lists (the latter
being to string together values that have been modified or that
arrived on separate reads).  But we need to do that anyway if we want
to avoid all the string comparisons in the existing table stuff.
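One way that two-level structure could look, as a sketch (all type and
function names here are hypothetical, not Apache API): each
comma-separated element is a chain of string fragments, so a value
modified in place or split across two reads is chained rather than
re-copied.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct frag {          /* one fragment of an element's text */
    const char *data;
    size_t len;
    struct frag *next;
} frag;

typedef struct elem {          /* one comma-separated element */
    frag *frags;
    struct elem *next;         /* next element in the field value */
} elem;

/* Append a fragment to an element's fragment chain. */
static void elem_append(elem *e, const char *s, size_t len)
{
    frag *f = malloc(sizeof(*f));
    f->data = s; f->len = len; f->next = NULL;
    frag **p = &e->frags;
    while (*p)
        p = &(*p)->next;
    *p = f;
}

/* Total length of an element once its fragments are joined. */
static size_t elem_len(const elem *e)
{
    size_t n = 0;
    for (const frag *f = e->frags; f; f = f->next)
        n += f->len;
    return n;
}

/* Flatten an element into a caller-supplied buffer (NUL-terminated). */
static void elem_copy(const elem *e, char *out)
{
    for (const frag *f = e->frags; f; f = f->next) {
        memcpy(out, f->data, f->len);
        out += f->len;
    }
    *out = '\0';
}
```

Flattening happens only when some consumer actually needs the
contiguous string, which is the point: routine header handling can walk
the fragment chains without any of the copies or strcmp-driven lookups
the existing table code does.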

