httpd-dev mailing list archives

From Dean Gaudet <dgau...@arctic.org>
Subject Re: Making pool allocation functions smarter
Date Thu, 26 Jun 1997 19:46:26 GMT
The current code performs extremely well ... because it doesn't allow
realloc or free, there is no bookkeeping within a block.  For many servers
and modules this is all they need.  I'm happy with what you're saying,
provided you really can prevent it from costing anything extra unless
free/realloc are actually used.

In my experience you need to at least store the length of each returned
object s at ((unsigned *)s)[-1], or something like that.  You also have to
make sure every allocation is large enough to fit your free-list structure
in it.  And coalescing is an absolute must.
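
Something like this, just as a sketch (none of it is the existing alloc.c
code, and the names are made up):

#include <stdlib.h>

/* Sketch only.  The length of each returned object s lives at
 * ((unsigned *)s)[-1]; when s is freed its body is reused as a
 * free-list link, so every allocation is padded up to at least the
 * size of that link.  Alignment is glossed over here. */

struct free_link {
    struct free_link *next;            /* threads freed objects together */
};

static struct free_link *free_list;

static void *alloc_with_header(unsigned size)
{
    unsigned *p;

    if (size < sizeof(struct free_link))
        size = sizeof(struct free_link);  /* room for the link later */
    p = malloc(sizeof(unsigned) + size);
    if (p == NULL)
        return NULL;
    p[0] = size;                          /* the length header */
    return p + 1;                         /* the caller's pointer s */
}

static void free_with_header(void *s)
{
    struct free_link *l = s;              /* body is big enough by construction */
    unsigned len = ((unsigned *)s)[-1];   /* the length, usable for coalescing */

    (void)len;    /* a real allocator would try to merge neighbours here */
    l->next = free_list;
    free_list = l;
}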

There are some cute tricks you can do when your "input sizes" are random
... tricks that look a bit like skip lists.  They might already be used in
some of the existing free allocators out there (e.g. gnu, perl, freebsd,
linux).

Dean

On Thu, 26 Jun 1997 rasmus@bellglobal.com wrote:

> Something to think about for 2.0 or even 1.3.
> 
> The current palloc mechanism is nice and simple, but not extremely flexible.
> We might consider improving this bit of the code in 2.0.  One thing that
> perhaps goes against the pool-based philosophy, but that I have wished for
> a number of times, is a prealloc() function.  That is, being able to grow
> a chunk of memory obtained from a pool or sub-pool.
> 
> This would mean a number of things.  First, it effectively means we would need
> a pfree() and we would need to be able to keep track of chunks of memory 
> within each block.  Instead of just using up memory at the end of the current
> block, we would scan through available free'ed chunks within the blocks and
> pick the first that was large enough.  
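> 
> A rough sketch of the lookup I have in mind (the names here are made up,
> not the real alloc.c structures):
> 
> struct free_chunk {
>     unsigned size;                /* usable bytes in this chunk    */
>     struct free_chunk *next;      /* next freed chunk in this pool */
> };
> 
> /* First fit: reuse a freed chunk if one is big enough, otherwise
>  * return NULL and the caller bump-allocates from the end of a block
>  * exactly as palloc() does today. */
> static void *grab_freed_chunk(struct free_chunk **listp, unsigned size)
> {
>     struct free_chunk **pp;
> 
>     for (pp = listp; *pp != NULL; pp = &(*pp)->next) {
>         if ((*pp)->size >= size) {
>             struct free_chunk *c = *pp;
>             *pp = c->next;        /* unlink it from the list */
>             return c;             /* its space is reused for the new object */
>         }
>     }
>     return NULL;
> }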
> 
> We currently keep a free block list.  Each block on this list has a start
> pointer, an end pointer and a first-available pointer.  We check whether the
> end pointer minus the first-available pointer leaves enough room to hold the
> requested size.  This mechanism makes sure chunks at the end of each block
> do not go completely unused.
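> 
> Roughly (the real block header in alloc.c differs in the details):
> 
> struct block {
>     char *start;                  /* first byte of the block   */
>     char *endp;                   /* one past the last byte    */
>     char *first_avail;            /* start of the unused tail  */
>     struct block *next;
> };
> 
> /* The existing test: is the unused tail of this block big enough? */
> static int block_has_room(struct block *b, unsigned size)
> {
>     return (unsigned)(b->endp - b->first_avail) >= size;
> }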
> 
> Given that we already have this free block mechanism, it is conceivable
> that a pfree() would simply take a chunk that is currently part of a block,
> convert it into a separate block of its own, and add it to the free block
> list.
> 
> i.e.
> 
> A sub-pool at some instant may look like this:
> 
> A linked list of blocks where * indicates used memory and - free:
> 
>  block A      block B    block C
>  ********---  ******--   ******----
> 
> Say we wanted to make a chunk in the middle of block A available again.
> i.e.:
> 
>  block A      block B    block C
>  ***---**---  ******--   ******----
> 
> This would end up being:
> 
>  block A      block A1   block A2   block B    block C
>  ***          ---        **---      ******--   ******----
> 
> A quick optimization check could be added for the case where the first
> chunk in A2 later gets freed: blocks A1 and A2 being contiguous memory,
> they could be collapsed into a single block.
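> 
> In code the collapse might look something like this, using the same
> block fields as in the sketch above (again only a sketch; the rest of
> the block bookkeeping is glossed over):
> 
> /* If free block a1 ends exactly where free block a2 begins, the two
>  * are contiguous memory and can be merged into one block. */
> static void coalesce(struct block *a1, struct block *a2)
> {
>     if (a1->endp == a2->start) {
>         a1->endp = a2->endp;      /* a1 now covers both regions */
>         a1->next = a2->next;      /* drop a2 from the list      */
>     }
> }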
> 
> From a performance point of view, there should be little or no impact.
> If pfree() is never called, there is no impact; this doesn't add any
> extra checks.  If pfree() is called a lot, we could end up with a lot of
> fragmentation and thus a lot of small blocks on the block freelist.  Adding
> a minimum size before a freed chunk can be turned into a separate block
> should hopefully take care of that.  It's not the little chunks that worry
> me anyway; it's the larger chunks I would like to have some way to re-use
> without clearing the pool.
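> 
> The minimum-size check itself would be trivial; something like this,
> where the threshold is a made-up number that would need tuning:
> 
> #define MIN_FREE_BLOCK 64    /* below this, pfree() would just be a no-op */
> 
> static int worth_splitting_out(unsigned chunk_size)
> {
>     return chunk_size >= MIN_FREE_BLOCK;
> }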
> 
> Ok, I hope I explained that well enough.  It looks like it could be done
> cleanly, and it would be a big win in situations where you can't clear a
> sub-pool, but you know you are leaving orphan chunks of memory around.
> 
> And yes, I am volunteering to write and test this unless there are serious
> objections to the concept.
> 
> -Rasmus
> 

