httpd-dev mailing list archives

Subject: Re: filtering patches
Date: Mon, 10 Jul 2000 17:14:59 GMT

Filters know about the current chunk of data and any chunk they have set
aside for themselves.  They do not know where they are in the request
processing, or which other filters are installed.  Filters are
self-contained, just like all modules have always been.
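
In other words, the model is roughly this (a sketch only -- the names
below follow the style the httpd/APR code has been converging on, and
are illustrative rather than a committed interface):

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

typedef struct {
    apr_bucket_brigade *set_aside;  /* chunks held back from earlier calls */
} my_ctx;

static apr_status_t my_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    my_ctx *ctx = f->ctx;

    if (ctx == NULL) {
        /* First invocation: create this filter's private context. */
        f->ctx = ctx = apr_pcalloc(f->r->pool, sizeof(*ctx));
        ctx->set_aside = apr_brigade_create(f->r->pool, f->c->bucket_alloc);
    }

    /* The filter sees only bb plus whatever it set aside earlier; it
     * has no view of the rest of the request or of the other filters. */
    APR_BRIGADE_CONCAT(ctx->set_aside, bb);

    /* Hand everything accumulated so far to the next filter. */
    return ap_pass_brigade(f->next, ctx->set_aside);
}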

They can create sub-pools easily, because they have the request_rec, but
the data shouldn't be allocated out of a pool.  Any data allocated out of
a pool sticks around until the pool is destroyed, which for the request
pool means the end of the request.  Unless pools are modified to free
memory, or we do some funky sub-pool management, pools can't be used for
filter data.
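
To make the lifetime problem concrete (using present-day APR pool calls
for illustration; the spelling of the functions isn't the point):

#include "httpd.h"
#include "apr_pools.h"

static void lifetime_example(request_rec *r)
{
    /* Allocated from the request pool: this lives until the end of
     * the request, even if the filter needed it for only one call. */
    char *persistent = apr_palloc(r->pool, 100 * 1024 * 1024);

    /* The "funky sub-pool management" alternative: a scratch sub-pool
     * created and destroyed around each chunk actually frees memory. */
    apr_pool_t *scratch;
    apr_pool_create(&scratch, r->pool);
    char *temporary = apr_palloc(scratch, 100 * 1024 * 1024);
    /* ... work on the chunk in temporary ... */
    apr_pool_destroy(scratch);   /* the 100MB is released here */
}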

What I find amazing is that what is being asked for now is my very first
ioblock patch, except as a recursive algorithm.  I could have had that
done in a day and a half, two months ago.

The problem with pools isn't that a filter can't find a sub-pool it
created before.  The problem is that there are two ways to associate data
with pools.  Either each filter has its own pool, in which case memory
has to be allocated and the data copied for every filter; or the data has
its own pool that is passed around with the data.  But without knowing
which pool is the parent of the data's pool, no pool can be destroyed (to
free the memory), because you don't know which sub-pools will also be
destroyed, freeing memory the other filters weren't done with yet.
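
A sketch of the second option shows the trap (the chunk structure here
is invented purely for illustration):

#include "apr_pools.h"

typedef struct chunk {
    apr_pool_t *pool;    /* the chunk's data is allocated from here */
    char       *data;
    apr_size_t  len;
} chunk;

static void release_chunk(chunk *c)
{
    /* Destroying c->pool also destroys every sub-pool beneath it.  If
     * some later filter allocated its working data out of a sub-pool
     * of c->pool, that data vanishes too -- and since nobody knows the
     * parentage of c->pool, no filter can safely make this call. */
    apr_pool_destroy(c->pool);
}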


On Mon, 10 Jul 2000, Rodent of Unusual Size wrote:
> wrote:
> > 
> > Pools will not work for filters.  The reason is simple.  They
> > don't free memory.  If I try to write 100MB and I allocate it
> > from a pool, that 100MB is allocated for the length of the request.
> > It can't shrink until I am done with the request.  Pools will work
> > if we play games with sub-pools, but that requires that no modules
> > destroy the request->pool or any of its sub-pools.  There is only one
> > way to free a bucket: through the _destroy function.  If it
> > is not destroyed, we still have the data.  Using pools is going to
> > end up meaning that no filters or generators will destroy_pools
> > for fear of ruining somebody else's data later in the chain.
> This and something else you said make me think your entire design
> is predicated on filters only being called to operate on a chunk of
> data, and having no knowledge of the progress of a request.
> Essentially, some Supreme Function calls the filter and says "Here,
> do your stuff with this" and just doesn't call it any more when there's
> no more data.  So each filter knows only about the current chunk
> of data.
> Is that correct?  Because if it is, I consider it lame.  Filters
> should be able to create and destroy sub-pools of r->pool whenever
> they like, and be able to find them easily from call to call.  (Or
> chunk to chunk.)
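
For reference, what is being asked for there would look something like
this (a sketch only, again with illustrative names in the current
httpd/APR style): the filter creates a sub-pool of r->pool on its first
call, finds it again on every later call through its context, and clears
or destroys it whenever it likes.

#include "httpd.h"
#include "util_filter.h"
#include "apr_pools.h"

typedef struct {
    apr_pool_t *subpool;   /* scratch space this filter owns outright */
} scratch_ctx;

static apr_status_t scratch_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    scratch_ctx *ctx = f->ctx;

    if (ctx == NULL) {
        f->ctx = ctx = apr_pcalloc(f->r->pool, sizeof(*ctx));
        apr_pool_create(&ctx->subpool, f->r->pool);
    }

    /* Working data for this chunk comes out of ctx->subpool ... */

    /* ... and is safe to release, but only because no other filter
     * ever allocates from this pool -- which is exactly why the data
     * itself still has to be copied rather than shared. */
    apr_pool_clear(ctx->subpool);

    return ap_pass_brigade(f->next, bb);
}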

Ryan Bloom               
406 29th St.
San Francisco, CA 94131
