httpd-dev mailing list archives

From "William A. Rowe, Jr." <wr...@rowe-clan.net>
Subject Re: filtering huge request bodies (like 650MB files)
Date Thu, 11 Dec 2003 19:50:46 GMT
At 07:01 PM 12/10/2003, Bill Stoddard wrote:
>Aaron Bannert wrote:
>>
>>[slightly off-topic]
>>Actually, I believe that mod_cgi and mod_cgid are currently broken
>>WRT the CGI spec. The spec says that a CGI may read as much of an
>>incoming request body as it wishes and may return data as soon as
>>it wishes (all AIUI). 

I agree with your reading; it's the first bug report I ever filed on Apache.

>>That means that right now if you send a big
>>body to a CGI script that does not read the request body (which
>>is perfectly valid according to the CGI spec) then mod_cgi[d] will
>>deadlock trying to write the rest of the body to the script.
>>The best way to fix this would be to support a poll()-like multiplexing
>>I/O scheme where data could be written to and read from the CGI process
>>at the same time. Unfortunately, that's not currently supported by
>>the Apache filter code.
>>-aaron
>
>Interesting. Then Apache 1.3 is broken too. I believe Jeff posted a patch not
>too long ago to enable a full duplex interface between Apache and CGI scripts.

Unfortunately the two are entirely unrelated.  The 1.3 patch would be terrific,
since on Win32 especially the pipe buffers were pretty small (until I increased
them to at least 64k inbound/outbound).

But the 2.0 architecture is entirely different.  We need a poll but it's not entirely
obvious where to put one...

One suggestion that was raised is a poll bucket: when a connection-level filter
cannot read anything more, it passes back a bucket containing a poll descriptor
as metadata.  Each filter passes this metadata bucket back up.  Some filters,
like mod_ssl, would move it from the connection brigade to the data brigade.
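
To make that concrete, something like this might work - purely a sketch
against today's APR bucket API, since no such bucket type exists and every
name below is invented:

    #include "apr_buckets.h"
    #include "apr_poll.h"

    /* Hypothetical metadata bucket carrying the descriptor that the
     * connection-level filter is blocked on (socket or pipe).
     */
    typedef struct {
        apr_pollfd_t pfd;
    } poll_bucket_data;

    static apr_status_t poll_bucket_read(apr_bucket *b, const char **str,
                                         apr_size_t *len,
                                         apr_read_type_e block)
    {
        *str = NULL;    /* metadata buckets never return data */
        *len = 0;
        return APR_SUCCESS;
    }

    static const apr_bucket_type_t bucket_type_poll = {
        "POLL", 5, APR_BUCKET_METADATA,
        apr_bucket_destroy_noop,    /* pfd storage lives in a pool */
        poll_bucket_read,
        apr_bucket_setaside_noop,
        apr_bucket_split_notimpl,   /* metadata cannot be split */
        apr_bucket_simple_copy
    };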

When a module like mod_cgi sees that bucket come back from its last
apr_brigade_read, it could then multiplex what it wants to do while waiting
for more data.  Even with something like a charset conversion filter holding
an incomplete sequence, or mod_ssl with some data but an incomplete packet,
the module could continue to do 'something else' until that poll descriptor
was signalled, then call back down the filter chain to read more data.
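
The consumer side would then be simple enough - again just a sketch against
the hypothetical bucket type above:

    /* If a nonblocking read handed us nothing but the poll bucket,
     * pull out the descriptor and go do 'something else' until it
     * is signalled.
     */
    apr_bucket *b = APR_BRIGADE_FIRST(bb);
    if (b->type == &bucket_type_poll) {
        poll_bucket_data *pd = b->data;
        /* add pd->pfd to our pollset; once it fires, call
         * ap_get_brigade() again for the real data */
    }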

Now poll buckets are a simple solution for reads, but they don't work at all
for writes.  mod_cgi[d] simply passes the pipe bucket out the filter chain,
and that operation is always blocking.  The only valid results under today's
filter design are sent, or could not send [fatal].  The first filter that
cares reads from the CGI pipe, and transforms or writes that data.  At that
point we are deep in the output filter chain.
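
For reference, that output path is essentially this today (condensed from
mod_cgi's handler; script_out is the child's stdout apr_file_t):

    conn_rec *c = r->connection;
    apr_bucket_brigade *bb = apr_brigade_create(r->pool, c->bucket_alloc);
    apr_bucket *b = apr_bucket_pipe_create(script_out, c->bucket_alloc);
    APR_BRIGADE_INSERT_TAIL(bb, b);
    b = apr_bucket_eos_create(c->bucket_alloc);
    APR_BRIGADE_INSERT_TAIL(bb, b);
    /* this call blocks until the filter chain has consumed the pipe */
    rv = ap_pass_brigade(r->output_filters, bb);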

The only sane solution I can think of would be a hybrid.  On the
read-from-client/write-to-pipe side, we implement a poll bucket.  On the
read-from-pipe side, we have to actually buffer the data instead of passing
the pipe bucket down the filter chain.  So we are polling on several events:

  CGI stdin pipe ready-to-write?
    \yes - write to the pipe, and also start polling again;
    Network (pipe bucket) ready to read?
       \yes - Read again (nonblock) from the input filters
  CGI stdout pipe data-to-read?    
    \yes - read the available data (nonblock), and pass the brigade out

This ignores whether the network is ready to write, because we just won't *do*
anything until the CGI results have been written out.
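
With today's apr_pollset API the skeleton of that loop might look like this
(the glue to get the network descriptor out of a poll bucket is still
hypothetical, as are script_in/script_out):

    apr_pollset_t *ps;
    apr_pollfd_t pfd;
    const apr_pollfd_t *hot;
    apr_int32_t n, i;
    int done = 0;

    apr_pollset_create(&ps, 3, r->pool, 0);

    pfd.p = r->pool;
    pfd.desc_type = APR_POLL_FILE;
    pfd.reqevents = APR_POLLOUT;      /* CGI stdin ready-to-write? */
    pfd.desc.f = script_in;
    apr_pollset_add(ps, &pfd);

    pfd.reqevents = APR_POLLIN;       /* CGI stdout data-to-read? */
    pfd.desc.f = script_out;
    apr_pollset_add(ps, &pfd);

    /* ...plus the network descriptor from the poll bucket... */

    while (!done) {
        apr_pollset_poll(ps, -1, &n, &hot);
        for (i = 0; i < n; i++) {
            if (hot[i].desc.f == script_in) {
                /* write the next chunk of the body (nonblock) */
            }
            else if (hot[i].desc.f == script_out) {
                /* read what's there (nonblock), pass the brigade out */
            }
            else {
                /* network readable: read again (nonblock) from the
                 * input filters */
            }
        }
    }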

This also ignores a filter like mod_ext_filter.  That filter implies that our
poll buckets must allow for a collection of sockets/pipes to poll on.  Two
things can happen within mod_ext_filter_in: either it is blocked waiting for
more data, or it is truly taking its time computing some results.  We just
don't (can't) know the answer to that puzzle from inside Apache.

So consider mod_ext_filter_in.  Let's presume there are three things that
can trigger more labor in our hypothetical input filter...

 * it needs more input to continue.  Solution: poll the network.
 * it is churning away at its data.  Solution: poll the ext filter's stdout pipe.

and the most complex case:

 * it is churning away, but the ext filter's stdin pipe is still *full*!  Even
   with more network data, we have to ignore the fact that we have more
   data to give to the ext filter until it either empties the stdin pipe or has
   more stdout pipe results for us to process.
   Solution: mod_ext_filter_in looks at the full stdin pipe and declines
   to read more from the network.  It sets aside the current network input
   and does *not* return the network poll bucket, but instead passes its
   own poll bucket naming *both* the stdin and stdout ext filter pipes.
   
Imagine a chain of such things - we really define the problem in terms of
a set of filters that would trigger another nonblocking attempt to get the 
input chain moving again.
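
Inside a hypothetical mod_ext_filter_in pass, the three cases might reduce
to something like this (every helper name here is invented):

    if (stdin_pipe_has_room(ctx)) {
        if (needs_more_input(ctx)) {
            /* case 1: hand the network poll bucket on up unchanged */
            return pass_poll_bucket(f, bb, ctx->net_pfd);
        }
        /* case 2: the child is churning; poll its stdout pipe */
        return pass_poll_bucket(f, bb, ctx->child_out_pfd);
    }

    /* case 3: stdin is full - set aside the pending network input and
     * return our own poll bucket naming *both* child pipes */
    set_aside_network_input(ctx);
    return pass_poll_bucket2(f, bb, ctx->child_in_pfd, ctx->child_out_pfd);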

So that's the input side - now to consider the output side.  We can make
one assumption here for the sake of the handler - we don't need the handler
to do *anything* more until we can shoot its cumulative results to the network.

mod_ext_filter has the same stdin/stdout blocking problems as mod_cgi, so
let's consider that complex filter case.  If mod_ext_filter sees that the
filter can accept more data, obviously that data should be written to the
pipe (nonblocking).  So long as the ext filter's stdout pipe has data, we can
read it and pass it out to the network.  The filter may be blocked on stdin
because it is stuck in a blocking write trying to return its stdout results,
so priority one is to pull the data off stdout and pass it out to the network.

What do we do when there is nothing on the ext filter's stdout?  Unlike other
filters, we don't know if it's stuck waiting for sufficient data to continue
processing, or just taking its lazy time trying to compute results.

  * Brigade ended in EOS?  Well, our caller will never try calling again,
    so ALWAYS poll on write to the ext filter's stdin and on read from its
    stdout, pulling the stdout from the filter and feeding it data as it's
    ready for more.  This is the *only* faux-blocking case.

Otherwise...
    
  * stdin is not full and we've written all our data to the filter?
    Solution: return immediately; we can let that filter keep churning away
    or stall for more input - we don't care, as long as the ext filter's
    stdout has been cleared.

  * stdin is full and all the stdout results have been read and passed down?
    Solution: return immediately.  The pipe is full, so the caller can't be
    blocked on us (if it contained 25 bytes and was attempting a block-read
    of 32 bytes, of course it is blocked on us).  But presume the caller won't
    ask for more bytes from the stdin pipe than the pipe is capable of
    holding.  The child will continue to churn, but we can keep composing data.
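
Reduced to code, those output-side rules amount to this (again only a
sketch, with every helper name invented):

    if (saw_eos(bb)) {
        /* the only faux-blocking case: poll the child's stdin and
         * stdout until all input is written and stdout has drained,
         * passing results out as they arrive */
        return drain_child(f, ctx);
    }
    if (!stdin_is_full(ctx) && all_input_written(ctx)) {
        /* nothing left to write; the child may churn or stall,
         * we don't care as long as its stdout is cleared */
        return APR_SUCCESS;
    }
    if (stdin_is_full(ctx) && stdout_is_drained(ctx)) {
        /* the caller can't (reasonably) be blocked on us: return
         * and keep composing data */
        return APR_SUCCESS;
    }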

This is *one* solution - you can probably see some alternative rules and come
up with good justifications either way.  The one side effect is that we would
only attempt to flush more data down to the client when the core handler is
ready to send more data.  I'd like to see a proof of concept to determine
whether this would be a large obstacle.

Finally, you can see other permutations with mod_proxy - I'll leave those up
to someone else to explore - and determine if they fit within the scope I've
outlined above, or if my outline was insufficient to cover some edge cases.

Bill


