httpd-dev mailing list archives

From: Greg Stein <gst...@lyra.org>
Subject: Re: filtered I/O - flow control
Date: Thu, 01 Jun 2000 20:58:43 GMT
On Thu, 1 Jun 2000 rbb@covalent.net wrote:
>...
> > > > Seems like that could potentially cause a large increase in storage use
> > > > for huge files.  
> > > 
> > > There is no more potential for this than there is currently.  Remember
> > > that in both approaches, the maximum that can be sent to the filter is the
> > > current maximum that can be sent to the network.  In both approaches the
> > > amount of data returned from the filters is likely to grow very large, but
> > 
> > The link-based approach does not "return" data. It sends it down through
> > the layers to the network as it is being generated.
> 
> But it doesn't matter whether it gets sent up or down, because the
> limitation still exists.

Eh? What limitation?

In the link-based approach, I can have a filter that generates 100
megabytes of random data and delivers it all out through the socket (the
loop sketched below does exactly this). I see no limitation there.

Please explain how the hook-based approach would do this.

> > > in both approaches, the filter is going to end up being limited by the
> > > same limits that Apache has.  I believe this is a red herring.
> > 
> > Actually, I think that Greg has an incredibly valid point here. One that I
> > certainly didn't see, and one that makes me even happier with the
> > link-based approach :-)
> 
> As I said, both approaches still impose the same limits that currently
> exist in 1.3.

What limits?

> > > The same back pressure should exist with either layer mechanism.
> > 
> > This is untrue.
> > 
> > The hook-based mechanism completely decouples the filtered-data generation
> > from the output. The filter is expected to return the *complete* filtered
> > output through the iovec.
> 
> No, the filter is ALLOWED to, not expected to; that is a big difference.

Eh? If the filter does not return all the data back out through the iovec,
then where is it supposed to go? The filter cannot write it to the
network.
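
To make the data flow concrete: under the hook-based model, the core has
to drive a filter roughly like this. filter_hook() and its signature are
hypothetical, sketched purely from this discussion; ap_bwrite() is just
the existing 1.3 buffered write:

{
    struct iovec out;

    /* step 1: the filter transforms content/content_len (whatever the
     * handler produced) and hands back ALL of its output through the
     * iovec -- it never touches the network itself
     */
    filter_hook(r, content, content_len, &out);

    /* step 2: only after the filter has returned does any network I/O
     * happen; by then the complete output is sitting in memory
     */
    ap_bwrite(r->connection->client, out.iov_base, out.iov_len);
}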

> > In the link-based approach, we actively call "down" through the layers,
> > ending up at the network. If the network blocks, then the filter(s) will
> > pause in its content generation.
> 
> But the link-based approach is still going to try to send as much data to
> the network as the hook-based approach.  This is a red herring IMHO.

The hook-based approach has no "back pressure" as Greg Ames described it.
There is nothing pushing back against the 100M-filter to slow down its
generation. It will allocate a big blob, stick that into an iovec, and
return it.
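
For example (the filter signature is hypothetical; ap_palloc() and OK
are the existing 1.3 pieces), that 100M generator comes out looking
roughly like this as a hook-based filter:

static int random_100m_filter(request_rec *r, struct iovec *out)
{
    int total = 100 * 1024 * 1024;
    char *blob = ap_palloc(r->pool, total);  /* one giant allocation */

    fill_random(blob, total);    /* nothing pushes back against this */
    out->iov_base = blob;
    out->iov_len = total;
    return OK;   /* the core writes it to the network after we return */
}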

In the link-based approach, we start generating and writing the data:

{
    /* push 'count' bytes of random data down the layer stack,
     * one CHUNK_SIZE buffer at a time
     */
    while (count > 0) {
        char buf[CHUNK_SIZE];
        int len = count;

        if (len > sizeof(buf))
            len = sizeof(buf);
        fill_random(buf, len);       /* generate the next chunk */
        ap_lwrite(next, buf, len);   /* blocks when the network blocks */
        count -= len;
    }
}

In the above code, ap_lwrite() will block as soon as the socket's send
buffer fills. That block propagates straight back up into the loop: the
filter simply stops generating until the network drains. That is the
back pressure.

Please demonstrate the hook-based approach and how it receives back
pressure from the network.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/

