apr-dev mailing list archives

From Ioan Popescu <ipope...@dataq.com>
Subject Re: APR Pools
Date Thu, 02 Nov 2006 13:16:37 GMT
Bojan Smojver wrote:
> - faster than many malloc() implementations, especially on pool reuse
> 
> - can attach generic cleanup functions to them, with same lifetime
> 
> - one cleanup does it all, instead of many free()'s

Please tell me whether my understanding of pools is right. A chunk of
system memory is allocated up front, and requests for memory are then
satisfied out of that already-allocated chunk. I was going to ask about
fragmentation, but the system allocator has to deal with that too, so never
mind. It just occurred to me that nothing allocated from a pool is ever
deallocated until the pool is cleared or destroyed.

> - can have subpools for allocating memory in a branch of code that may 
> end up in an error, in which case the subpool can be easily dumped 
> without affecting the parent pool, therefore reducing the memory pressure

I like this.

> Apache has a notion of maximum number of requests a child process will 
> handle. This can help with memory leaks, as processes simply get recycled
> after a while, therefore freeing any "long lifetime" memory that may have
> leaked in the child during its service period. Not suitable for all apps,
> but may help in some situations.

Sounds like a good idea, but I'm developing a library. It can't just release
old memory because that might crash an application. Unless I "own" the
memory (or claim I do), I can't just release it.

Graham Leggett wrote:
> In both cases, the library user has to call some kind of cleanup function
>  to return the memory back to the library. Whether that cleanup function 
> calls "free" for all allocated blocks, or "apr_pool_destroy", it amounts 
> to the same responsibility from the caller.
> 
> If a particular function in your library "do_stuff" has to do stuff 
> without requiring a cleanup afterwards, create a subpool at the start of 
> do_stuff, and destroy it at the end before return.

True, but I thought one was supposed to avoid per-object pools? If I create
a pool for every object, doesn't the number of pools grow without bound? I
would have to implement reference counting to decide when each pool can be
cleared. Well, since the objects requiring reference counting aren't on the
critical path, I could implement it.

Maybe my packet processing library has some similarities to Apache. I listen
for packets, I parse the raw packets into structures allocated from a packet
pool. I run these processed packets through client provided
filters/operations. I'm finished with the packet. Sounds easy enough. How do
I "free" the packet? Just clear the pool before processing the next packet?
Unless other complications arise (working on multithreading), this sounds
like it should work for this scenario (I also have other objects that
dictate logic prior to this packet processing).
