httpd-dev mailing list archives

From Brian Pane <>
Subject Re: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)
Date Thu, 02 Jan 2003 22:19:51 GMT
David Burry wrote:

>>Random thoughts:
>>- Did the content have short expiration times (or recent change dates,
>>which would cause the cache to make aggressive expiration estimates)?
>>That would churn the cache.
>No.  Files literally never change; when updates appear they are always new
>files, and web pages simply point to the new ones with each update.  In this
>application these are all downloadable executable files; think of an FTP
>repository served over HTTP.

For large files, I'd anticipate that mod_cache wouldn't provide much benefit
at all.  If you characterize the cost of delivering a file as

   time_to_stat_and_open_and_close + time_to_transfer_from_memory_to_network

mod_mem_cache can help reduce the first term but not the second.  For small
files, the first term is significant, so it makes sense to try to optimize
away the stat/open/close with an in-httpd cache.  But for large files, where
the second term is much larger than the first, mod_mem_cache doesn't 
have an advantage.  And it has at least three disadvantages that I can 
think of:
  1. With mod_mem_cache, you can't use sendfile(2) to send the content.
     If your kernel does zero-copy on sendfile but not on writev, it
     could be faster to deliver the file directly than to send a cached
     copy.
  2. As long as mod_mem_cache maintains a separate cache per worker
     process, it will use memory less efficiently than the filesystem.
  3. On a cache miss, mod_mem_cache needs to read the file in order to
     cache it.  By default, it uses mmap/munmap to do this.  We've seen
     mutex contention problems in munmap on high-volume Solaris servers.

What sort of results do you get if you bypass mod_cache and just rely on
the Unix filesystem cache to keep large files in memory?
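For the experiment, a minimal configuration sketch, assuming Apache 2.x directive names and a hypothetical /downloads/ tree for the large files:

```apache
# Sketch: let the OS page cache serve the large files.
# Directive names from Apache 2.x; the /downloads/ path is illustrative.
<IfModule mod_cache.c>
    # Keep the in-httpd cache away from the large-download tree
    CacheDisable /downloads/
</IfModule>

# Hand static file delivery to the kernel so sendfile(2) can be used
EnableSendfile On
EnableMMAP Off
```

With mod_cache out of the picture, repeat requests for the same file should be served from the kernel's page cache via sendfile, with no per-process duplicate copies.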

