httpd-dev mailing list archives

From: Brian Pane <brian.p...@cnet.com>
Subject: RE: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)
Date: Sun, 12 Jan 2003 18:39:05 GMT
On Fri, 2003-01-10 at 12:40, Bill Stoddard wrote:
> I was meaning to respond to this, but forgot until I saw the blurb in ApacheWeek
> :-)

We all really need to find time to write some code, so that
ApacheWeek will have something to cover besides design debates. :-)


> > For large files, I'd anticipate that mod_cache wouldn't provide much benefit
> > at all.  If you characterize the cost of delivering a file as
> >
> >    time_to_stat_and_open_and_close + time_to_transfer_from_memory_to_network
> >
> > mod_mem_cache can help reduce the first term but not the second.  For small
> > files, the first term is significant, so it makes sense to try to optimize
> > away the stat/open/close with an in-httpd cache.  But for large files, where
> > the second term is much larger than the first, mod_mem_cache doesn't
> > necessarily have an advantage.
> 
> The read can be expensive over NFS. Yes, one would hope the file system cache
> would cover this. And perhaps it does in most cases. 

Yeah, in practice I've found that most of the load on our
NFS servers is in the form of file and attribute lookup requests
(in support of stat and cache coherency checks) rather than
actual reads, due to the effects of client-side caching.
Your mileage may vary, of course.
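
To put made-up but illustrative numbers on the cost model above
(these are assumptions, not measurements): say the stat/open/close
adds about 1 ms, and the network path sustains roughly 10 MB/s.

    4 KB file:   1 ms + ~0.4 ms transfer   -> metadata is ~70% of the cost
    10 MB file:  1 ms + ~1000 ms transfer  -> metadata is ~0.1% of the cost

A cache that removes only the first term buys a lot on small files
and almost nothing on large ones.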

> Generally I agree with the
> analysis. The big expenses are in the stat/open/close.
> 
> > And it has at least three disadvantages that I can
> > think of:
> >   1. With mod_mem_cache, you can't use sendfile(2) to send the content.
> >      If your kernel does zero-copy on sendfile but not on writev, it
> >      could be faster to send the file itself than a copy cached in
> >      user-space memory.
> 
> mod_mem_cache can cache open fds (CacheEnable fd /). Works really nicely on
> Windows. I have not seen much benefit in testing on AIX, and I don't know
> whether there are other performance implications on *ix from maintaining a
> large number of open fds.
> 
> >   2. And as long as mod_mem_cache maintains a separate cache per worker
> >      process, it will use memory less efficiently than the filesystem
> >      cache.
> 
> Yep. Not a big deal if you are caching open fds though.

Definitely, caching fds is in some ways an ideal solution: it lets
the OS manage a single cache image per file, but we still get to
eliminate the stat/open/close.
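
For the curious, the delivery path that fd caching enables looks
roughly like this (a sketch in plain POSIX/Linux terms, not the
actual mod_mem_cache code; fd_cache_lookup()/fd_cache_insert() are
hypothetical stand-ins for the module's cache table):

    /* Rough sketch only -- not mod_mem_cache's actual code.
     * fd_cache_lookup()/fd_cache_insert() are made-up helpers. */
    #include <sys/sendfile.h>   /* Linux sendfile(2) */
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    extern int  fd_cache_lookup(const char *uri);           /* hypothetical */
    extern void fd_cache_insert(const char *uri, int fd);   /* hypothetical */

    static int deliver(int client_sock, const char *uri, const char *path)
    {
        int fd = fd_cache_lookup(uri);
        if (fd < 0) {
            /* Miss: pay the open once, then keep the fd for later hits. */
            if ((fd = open(path, O_RDONLY)) < 0)
                return -1;
            fd_cache_insert(uri, fd);
        }

        /* fstat on the cached fd skips the pathname lookup entirely. */
        struct stat sb;
        if (fstat(fd, &sb) < 0)
            return -1;

        /* Zero-copy transfer; every worker process is served from the
         * single page-cache image the kernel keeps for this file. */
        off_t off = 0;
        while (off < sb.st_size) {
            ssize_t n = sendfile(client_sock, fd, &off, sb.st_size - off);
            if (n <= 0)
                return -1;
        }
        return 0;
    }

On a hit we never touch the pathname again, and sendfile keeps the
transfer zero-copy on kernels that support it.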

> >   3. On a cache miss, mod_mem_cache needs to read the file in order to
> >      cache it.  By default, it uses mmap/munmap to do this.  We've seen
> >      mutex contention problems in munmap on high-volume Solaris servers.
> 
> This is a result of mod_mem_cache using the bucket code (apr_buckets_file). I
> think we could extract the fd from the bucket and then do a read rather than an
> mmap. Should I work on a fix for this?

I think the "EnableMMAP off" directive will turn mod_mem_cache's
mmap into a read.  It works by setting a flag in the file bucket
that tells the bucket's read function whether or not it's allowed
to use mmap.
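
In APR terms, that flag is apr_bucket_file_enable_mmap().  A minimal
sketch of the idea (simplified, not httpd's actual handler code):

    #include "apr_buckets.h"

    /* Sketch only: append a file bucket to a brigade with mmap
     * disabled, the way "EnableMMAP off" arranges it.  'file' is
     * an already-open apr_file_t. */
    static void append_file_no_mmap(apr_bucket_brigade *bb,
                                    apr_file_t *file, apr_off_t len,
                                    apr_pool_t *pool)
    {
        apr_bucket *e = apr_bucket_file_create(file, 0, (apr_size_t)len,
                                               pool, bb->bucket_alloc);

        /* With the flag cleared, apr_bucket_read() on this bucket does
         * an apr_file_read() into a heap buffer instead of using
         * mmap()/munmap(). */
        apr_bucket_file_enable_mmap(e, 0);

        APR_BRIGADE_INSERT_TAIL(bb, e);
    }

With the flag cleared, the bucket read takes the plain read path, so
the munmap contention goes away at the cost of an extra copy into a
heap buffer.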

Brian


