apr-dev mailing list archives

From Stefan Fritsch <...@sfritsch.de>
Subject Re: reducing memory fragmentation
Date Sat, 19 Feb 2011 19:57:32 GMT
On Saturday 19 February 2011, Greg Stein wrote:
> On Fri, Feb 18, 2011 at 16:55, Stefan Fritsch <sf@sfritsch.de> wrote:
> > On Thursday 17 February 2011, Jim Jagielski wrote:
> >> Please also look at Greg's pocore memory allocation stuff...
> >> his hash stuff is also quite nice. Would be useful to be
> >> able to use that as well....
> > 
> > That looks like a much bigger change than what I have just
> > committed. But I agree that before trying to optimize apr's
> > allocator, one should try pocore's.
> > 
> > Have you thought about how to do this? Probably every apr pool
> > would be a wrapper for a pocore pool. But what about the other
> > users of apr_allocator, like bucket allocators?
> There is a branch in apr for wrapping pocore's pools and hash
> tables[1].

Nice, I didn't know about that.

> Obviously, the indirection slows it down, but it does
> demonstrate how it would work. (and it does: I've run the entire
> svn test suite using this wrapping)

Have you made any measurements of how much the slowdown is?

> My experiments show that mmap/munmap are NOT speed-advantageous on
> MacOS. But if you're looking at long-term memory usage and avoiding
> fragmentation... I don't have a good way to test that. That said,
> pocore should not be subject to fragmentation like apr. Its
> coalescing feature (designed w/ the APIs present, but not yet
> coded) will avoid much fragmentation.

I am sure that apr_allocator with mmap is not an advantage on all 
platforms. Actually, apr allocating only in multiples of 4k should 
make it easier for malloc to avoid fragmentation. But glibc's malloc 
fails miserably in that regard.

For the purpose of httpd giving unused memory back to the OS, your 
current apr/pocore branch won't be an improvement, because the bucket 
allocators still use apr_allocator, which will hold on to some free 
memory.