httpd-dev mailing list archives

From "David Burry" <dbu...@tagnet.org>
Subject Re: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)
Date Fri, 03 Jan 2003 05:54:58 GMT
Interesting... so then why did using mod_file_cache to specify caching a
couple dozen known most-often-accessed files decrease disk I/O significantly?
I'll try the test you mention next time I get a chance.
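
For reference, the mod_file_cache setup in question was roughly along these
lines (just a sketch, not the exact config; the paths are placeholders, and
whether MMapFile or CacheFile is the better fit depends on the platform):

    LoadModule file_cache_module modules/mod_file_cache.so

    # mmap the hottest large files into memory at server startup
    MMapFile /downloads/popular-1.iso
    MMapFile /downloads/popular-2.iso

    # or, to only skip the per-request open/close, cache the open handles instead
    #CacheFile /downloads/popular-1.iso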
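
And just so I'm clear on the test, I take it you mean something like the
following (paths again placeholders; on our Solaris box I'd watch the free
memory figure in top or vmstat before and after):

    top                        # note free memory after a fresh reboot
    cksum /downloads/*.iso     # read the large files end-to-end
    top                        # free memory should drop by roughly the bytes read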

Dave

----- Original Message -----
From: "Brian Pane" <brian.pane@cnet.com>
To: <dev@httpd.apache.org>
Sent: Thursday, January 02, 2003 9:43 PM
Subject: Re: mod_mem_cache bad for large/busy files (Was: [PATCH] remove some mutex locks in the worker MPM)


> On Thu, 2003-01-02 at 21:21, David Burry wrote:
> > ----- Original Message -----
> > From: "Brian Pane" <brian.pane@cnet.com>
> > Sent: Thursday, January 02, 2003 2:19 PM
> > >
> > > For large files, I'd anticipate that mod_cache wouldn't provide much
> > > benefit at all.  If you characterize the cost of delivering a file as
> > >
> > >    time_to_stat_and_open_and_close + time_to_transfer_from_memory_to_network
> > >
> > > mod_mem_cache can help reduce the first term but not the second.  For
> > > small files, the first term is significant, so it makes sense to try to
> > > optimize away the stat/open/close with an in-httpd cache.  But for large
> > > files, where the second term is much larger than the first, mod_mem_cache
> > > doesn't necessarily have an advantage.
> >
> > Unless... of course, you're requesting the same file dozens of times per
> > second (i.e. high hundreds of concurrent downloads per machine, because it
> > takes a few minutes for most people to get the file)... then caching it in
> > memory can help, because your disk drive would sit there thrashing
> > otherwise.  If you don't have gig ethernet, don't even worry; you won't
> > really see the problem, since ethernet will be your bottleneck.  What we're
> > trying to do is get close to maxing out a gig ethernet with these large
> > files without the machine dying...
>
> Definitely, caching the file in memory will help in this scenario.
> But that's happening already; the filesystem cache is sitting
> between the httpd and the disk, so you're getting the benefits
> of block caching for oft-used files by default.
>
>
> > > What sort of results do you get if you bypass mod_cache and just rely
> > > on the Unix filesystem cache to keep large files in memory?
> >
> > Not sure how to configure that so that it will use a few hundred megs to
> > cache often-accessed large files... but I could ask around here among more
> > Solaris-knowledgeable people...
>
> In my experience with Solaris, the OS is pretty proactive about
> using all available memory for the filesystem cache by default.
> One low-tech way you could check is:
>   - Reboot
>   - Run something to monitor free memory (top works fine)
>   - Run something to read a bunch of your large files
>     (e.g., "cksum [file]").
> In the third step, you should see the free memory decrease by
> roughly the total size of the files you've read.
>
> Brian
>
>

