httpd-dev mailing list archives

Subject Re: filtered I/O - flow control
Date Thu, 01 Jun 2000 20:52:22 GMT

> > > Seems like that could potentially cause a large increase in storage use
> > > for huge files.  
> > 
> > There is no more potential for this than there is currently.  Remember,
> > that in both approaches, the maximum that can be sent to the filter is the
> > current maximum that can be sent to the network.  In both approaches the
> > amount of data returned from the filters is likely to grow very large, but
> The link-based approach does not "return" data. It sends it down through
> the layers to network as it is being generated.

But it doesn't matter whether it gets sent up or down, because the
limitation still exists.

> > in both approaches, the filter is going to end up being limited by the
> > same limits that Apache has.  I believe this is a red herring.
> Actually, I think that Greg has an incredibly valid point here. One that I
> certainly didn't see, and one that makes me even happier with the
> link-based approach :-)

As I said, both approaches still impose the same limits that currently
exist in Apache 1.3.

> > The same back pressure should exist with either layer mechanism.
> This is untrue.
> The hook-based mechanism completely decouples the filtered-data generation
> from the output. The filter is expected to return the *complete* filtered
> output through the iovec.

No, the filter is ALLOWED to, not expected to; that is a big difference.

> In the link-based approach, we actively call "down" through the layers,
> ending up at the network. If the network blocks, then the filter(s) will
> pause in its content generation.

But the link-based approach is still going to try to send as much data to
the network as the hook-based approach.  This is a red herring, IMHO.


Ryan Bloom               
406 29th St.
San Francisco, CA 94131
