httpd-dev mailing list archives

From: Brian Pane <brian.p...@cnet.com>
Subject: Re: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)
Date: Wed, 01 Jan 2003 20:30:23 GMT
On Wed, 2003-01-01 at 11:26, David Burry wrote:
> Apache 2.0.43, Solaris 8, Sun E220R, 4 GB memory, gigabit ethernet.  We tried
> both the Sun Forte and gcc compilers.  The problem was that mod_mem_cache was
> just way too resource-intensive when pounding on a machine that hard while
> trying to see if everything would fit into the cache... CPU usage and mutex
> contention were very high, and memory in particular was out of control (we
> had many very large files, ranging from half a dozen to two dozen megs, and
> the most popular of those were what we really wanted cached), and we were
> running several hundred concurrent connections at once.  Maybe a new cache
> loading/hit/removal algorithm that works better for many hits to very large
> files would solve it, I don't know.

I know of a couple of things that cause mutex contention in
mod_mem_cache:

* Too many malloc/free calls

  This may be easy to improve.  Currently, mod_mem_cache performs
  many separate mallocs for the strings and nested objects within a
  cache object.  We could probably do a single malloc of one big
  buffer with enough space to hold all of those objects (a rough
  sketch of that idea follows this list).

* Global lock around the hash table and priority queue

  This will be difficult to fix.  It's straightforward to provide
  thread-safe, highly concurrent access to a hash table: either use
  a separate lock for each hash bucket, or use atomic CAS-based
  pointer swapping when traversing the hash chains (the second
  sketch below illustrates the per-bucket approach).  The problem
  is that we need to read/update the priority queue as part of the
  same transaction in which we read/update the hash table, which
  leaves us stuck with a big global lock.

  If we could modify the mod_mem_cache design to not require the
  priority queue operations and the hash table operations to be
  done as part of the same critical region, I think that would
  open up the door to some major concurrency improvements.  But
  I'm not sure whether that's actually possible.
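
To make the first point concrete, here's a rough sketch of the
one-big-buffer idea.  The struct and field names are made up for
illustration, not the real cache object layout; the point is just that
one allocation (and one free) replaces several per cached object, which
cuts down on trips through the allocator and its locks.

/* Sketch only: hypothetical struct, not mod_mem_cache's real layout. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    char   *key;           /* lookup key, stored inside the same block */
    char   *content_type;  /* ditto */
    char   *body;          /* cached response body */
    size_t  body_len;
} cache_obj;

/* One malloc instead of four: the header struct and all of the
 * variable-length pieces are carved out of a single contiguous block,
 * so the whole object is released with a single free(). */
cache_obj *cache_obj_make(const char *key, const char *ctype,
                          const char *body, size_t body_len)
{
    size_t klen = strlen(key) + 1;
    size_t clen = strlen(ctype) + 1;
    char  *block = malloc(sizeof(cache_obj) + klen + clen + body_len);
    cache_obj *obj;
    char  *p;

    if (block == NULL) {
        return NULL;
    }
    obj = (cache_obj *)block;
    p   = block + sizeof(cache_obj);

    obj->key          = memcpy(p, key, klen);    p += klen;
    obj->content_type = memcpy(p, ctype, clen);  p += clen;
    obj->body         = memcpy(p, body, body_len);
    obj->body_len     = body_len;
    return obj;        /* caller frees everything with free(obj) */
}

The trade-off is that the object's total size has to be known up front,
but for a fully built cache entry that's usually the case.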
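
And for the second point, here's what per-bucket locking looks like for
the hash-table half of the problem.  This is just a sketch using plain
pthreads rather than APR, and it's not the actual cache_hash code; it
also deliberately ignores the hard part described above, since any
priority-queue update that has to happen atomically with the lookup
would still need its own coordination.

/* Per-bucket locking sketch; illustration only. */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 256

typedef struct entry {
    struct entry *next;
    const char   *key;
    void         *val;
} entry;

typedef struct {
    entry           *head;
    pthread_mutex_t  lock;   /* contention spreads across NBUCKETS locks */
} bucket;

static bucket table[NBUCKETS];

static void table_init(void)
{
    int i;
    for (i = 0; i < NBUCKETS; i++) {
        table[i].head = NULL;
        pthread_mutex_init(&table[i].lock, NULL);
    }
}

static unsigned int hash_key(const char *key)
{
    unsigned int h = 5381;
    while (*key) {
        h = h * 33 + (unsigned char)*key++;
    }
    return h % NBUCKETS;
}

void *table_lookup(const char *key)
{
    bucket *b   = &table[hash_key(key)];
    void   *val = NULL;
    entry  *e;

    pthread_mutex_lock(&b->lock);    /* only this bucket is serialized */
    for (e = b->head; e != NULL; e = e->next) {
        if (strcmp(e->key, key) == 0) {
            val = e->val;
            break;
        }
    }
    pthread_mutex_unlock(&b->lock);  /* but the priority queue can't be
                                        updated atomically from here */
    return val;
}

The catch, of course, is that the lock's scope ends at the bucket:
promoting the entry in the priority queue would have to happen outside
it, which is exactly the transaction problem above.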

Brian


