httpd-dev mailing list archives

From Rodent of Unusual Size <Ken.C...@Golux.Com>
Subject Re: filtering patches
Date Mon, 10 Jul 2000 15:44:52 GMT
rbb@covalent.net wrote:
> 
> Pools will not work for filters.  The reason is simple.  They
> don't free memory.  If I try to write 100MB and I allocate it
> from a pool, that 100MB is allocated for the length of the request.
> It can't shrink until I am done with the request.  Pools will work
> if we play games with sub-pools, but that requires that no modules
> destroy the request->pool or any of its sub-pools.  There is one
> way to free a bucket: through the _destroy function.  If it
> is not destroyed, we still have the data.  Using pools is going to
> end up meaning that no filters or generators will destroy_pools
> for fear of ruining somebody else's data later in the chain.

This and something else you said make me think your entire design
is predicated on filters only being called to operate on a chunk of
data, with no knowledge of the progress of a request.
Essentially, some Supreme Function calls the filter and says "Here,
do your stuff with this" and just doesn't call it any more when there's
no more data.  So each filter knows only about the current chunk
of data.

Is that correct?  Because if it is, I consider it lame.  Filters
should be able to create and destroy sub-pools of r->pool whenever
they like, and be able to find them easily from call to call.  (Or
chunk to chunk.)
-- 
#ken    P-)}

Ken Coar                    <http://Golux.Com/coar/>
Apache Software Foundation  <http://www.apache.org/>
"Apache Server for Dummies" <http://Apache-Server.Com/>
"Apache Server Unleashed"   <http://ApacheUnleashed.Com/>
