httpd-dev mailing list archives

From Dean Gaudet <>
Subject Re: Malloc v.s. pools (was Shared memory in APR.)
Date Wed, 21 Jul 1999 02:17:14 GMT

On Fri, 16 Jul 1999, Ralf S. Engelschall wrote:

> So when it comes to the whole topic of "whether one needs a pool facility in a
> server" we all agree, of course. But when it comes only to the topic "whether
> a malloc() wrapper library is needed for performance", I'm still not convinced
> that the statement "malloc() is slow and has to be sped up for performance
> reasons by using a wrapper which allocates in larger chunks" is true.  Because
> as I said, with a reasonable malloc library, doing a ``cp = malloc(<expected
> maximum memory consumption>); free(cp);'' at server startup should lead to
> mostly the same result as using a wrapper library, because the heap cannot shrink on
> Unix and unless the chunk handling is totally bogus in the malloc library it
> should be mostly equal. I guess I overlook something essential here, of
> course. So my question is: WHAT?

I've always found hotspots in programs around allocation/freeing... it's
one of the basic things I look into when I'm asked to make something go
faster.  There are so many different techniques for approaching it... none of
them is suitable for everything.  The only way to figure out whether the
allocator is affecting things is to profile and/or replace it.  There's no
absolute statement you can make... I don't believe a "reasonable" malloc
library exists. 

I just generally expect malloc to give me first-fit... which works quite
well, and has reasonable performance.  Then I work from there. 

When you have a lot of fixed sized objects, it is far better to allocate
them in large arrays and use a simple stack for a free list. 
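A minimal sketch of that technique -- the names, sizes, and `NOBJ` limit here are all invented for illustration: one static array is carved into fixed-size objects, and freed objects are pushed onto a singly linked stack so alloc and free are both O(1) pointer pops and pushes.

```c
#include <stddef.h>

/* Hypothetical fixed-size allocator: one big array of objects,
 * with a simple stack (free list) threaded through the free slots. */
#define NOBJ 1024

typedef struct obj {
    struct obj *next;      /* free-list link while the slot is free */
    char payload[60];      /* the object's actual data when in use  */
} obj;

static obj arena[NOBJ];
static obj *free_list;

static void obj_init(void)
{
    /* push every slot onto the free list once, at startup */
    for (size_t i = 0; i < NOBJ; i++) {
        arena[i].next = free_list;
        free_list = &arena[i];
    }
}

static obj *obj_alloc(void)        /* pop: O(1), no malloc */
{
    obj *o = free_list;
    if (o)
        free_list = o->next;
    return o;                      /* NULL when the arena is exhausted */
}

static void obj_free(obj *o)       /* push: O(1), no free */
{
    o->next = free_list;
    free_list = o;
}
```

Because the stack is LIFO, a just-freed object is the next one handed out, which is also friendly to the CPU cache.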

pools are wonderful for short transactions, but suck for long-lived
connections.  Contrast HTTP and IMAP for example: pools work really well
for HTTP requests, but would suck for an IMAP session.  pools actually
suck for a long-lived HTTP persistent connection too, but research shows that
60 seconds is too long for any persistent connection, so we tear things
down frequently.  pools would be OK for an IMAP command parser, but would
suck for the global storage (such as cached indexes) related to the
session.

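To make the lifetime contrast concrete, here is a toy pool -- a sketch only, not APR's or Apache's actual pool code, with every name invented: each allocation is linked into the pool, and nothing can be released until the whole pool is destroyed.  That is exactly right for create-pool / handle-request / destroy-pool, and exactly wrong for state that outlives many requests.

```c
#include <stdlib.h>

/* Toy pool: allocations are chained so pool_destroy() frees them
 * all at once.  There is deliberately no way to free one item. */
typedef struct chunk { struct chunk *next; } chunk;
typedef struct pool  { chunk *chunks; } pool;

static pool *pool_create(void)
{
    pool *p = malloc(sizeof *p);
    if (p)
        p->chunks = NULL;
    return p;
}

static void *pool_alloc(pool *p, size_t n)
{
    chunk *c = malloc(sizeof *c + n);   /* header + caller's bytes */
    if (!c)
        return NULL;
    c->next = p->chunks;                /* remember it for bulk free */
    p->chunks = c;
    return c + 1;                       /* memory just past the header */
}

static void pool_destroy(pool *p)
{
    chunk *c = p->chunks;
    while (c) {
        chunk *next = c->next;
        free(c);
        c = next;
    }
    free(p);
}
```

For a short HTTP request the bulk free is a feature: no leak tracking, one cheap teardown.  For an IMAP session's cached indexes it is the problem: the pool only grows until the session ends.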
You might notice I've just been using malloc/free in the mpm code.  That's
because as we move into this threaded world, we have to be careful of when
we use pools, and when we don't.  We will have more persistent data... we
can't use pools effectively for a file cache, for example -- because we
couldn't free the individual filenames when the cache fills. 

pools serve us really well when we have asynchronous cancellation (apache
1.x) -- because they give us a place to make a note of a resource.  But
without async cancellation (apache 2.x), fewer resources actually need to
be noted.

pools are also thread-friendly because they reduce the number of times
threads have to contend for an allocator mutex. 
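One way to picture that: malloc has to synchronize on every call in a threaded program, while a thread-private region needs no lock at all on its fast path.  A hypothetical bump-pointer arena (all names invented here, not any real pool implementation) shows that lock-free path; a real version would take a lock only when refilling from the system.

```c
#include <stddef.h>

/* Thread-private bump allocator: since only one thread ever touches
 * a given arena, the common allocation path needs no mutex. */
typedef struct arena {
    char *base, *cur, *end;
} arena;

static arena arena_make(char *buf, size_t size)
{
    arena a = { buf, buf, buf + size };
    return a;
}

static void *arena_alloc(arena *a, size_t n)
{
    n = (n + 7) & ~(size_t)7;          /* round up to 8-byte alignment */
    if (n > (size_t)(a->end - a->cur))
        return NULL;                   /* a real pool would refill here,
                                          taking a lock only for that */
    void *p = a->cur;
    a->cur += n;                       /* the fast path: one add, no lock */
    return p;
}
```

Give each worker thread its own arena (or pool) and the allocator mutex is hit once per refill instead of once per allocation.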

It'll be a fun balancing game.  allocation always is; we've just been
lucky so far because we could have tunnel vision and consider only the
multiprocess, single-request model, which has almost no persistent data.

