httpd-dev mailing list archives

Subject Re: filtered I/O - flow control
Date Thu, 01 Jun 2000 20:15:11 GMT

> >           If at any point there is too much data to write to
> > the network, it will be up to Apache's internal buffering to catch it and
> > hold it until the socket is ready.  
> Seems like that could potentially cause a large increase in storage use
> for huge files.  

There is no more potential for this than there is currently.  Remember
that in both approaches, the maximum that can be sent to the filter is the
current maximum that can be sent to the network.  In both approaches the
amount of data returned from the filters is likely to grow very large, but
in both approaches, the filter is going to end up being limited by the
same limits that Apache has.  I believe this is a red herring.

> If I understand how it works today correctly, we typically have only 2
> moderately sized BUFF buffers tied up for output - one for sockets I/O,
> and one for either file I/O or CGI output.  The process/thread ends up
> blocking on some socket operation (select, write, writev).  When the
> socket becomes ready, we unwind to ap_bwrite's caller (ap_send_fd_length
> for example) who iterates a loop to read the next piece of data from the
> file/CGI pipe.  This is a back pressure mechanism that limits the amount
> of data that Apache needs to buffer at any instant, which I think is a
> Good Thing.  I'd hate to lose it with filters.

The same back pressure should exist with either layer mechanism.


Ryan Bloom               
406 29th St.
San Francisco, CA 94131
