httpd-dev mailing list archives

From Brian Behlendorf <br...@wired.com>
Subject Re: Harvest cache now available as an ``httpd accelerator'' (fwd)
Date Wed, 05 Apr 1995 21:11:26 GMT
On Wed, 5 Apr 1995, David Robinson wrote:
> Input please.

There are two possibilities I see here:

1) Create a generalized caching system for any file that pays attention 
to everything it *needs* to - content negotiation, authentication, 
modification times, scripts, everything.  This is a big undertaking, one 
that's difficult to get right and will mean the code base will get very 
big.

2) Have an interface for manual selection of files to cache.  I.e., the 
administrator specifies a list of files (or subdirectories, etc) that 
they know are not susceptible to authentication or content negotiation or 
server-side includes, and when httpd fires up it loads those (few) pages 
into memory.  This works on the principle that a sizeable percentage 
of web hits come from a limited number (10-20) of pages and inlined 
images.  A stat() should still be run for every access to in-memory files 
since they could change or whatnot.  
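A minimal sketch of what option 2 could look like, assuming a simple per-file cache entry and a stat() check on every access; the struct and function names here are purely illustrative, not from any actual httpd source:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sys/stat.h>

/* One admin-listed file held in memory.  The administrator's config
 * would produce an array of these at startup. */
struct cache_entry {
    const char *path;   /* file the admin listed for caching */
    char *data;         /* in-memory copy of the file contents */
    size_t len;
    time_t mtime;       /* modification time when loaded */
};

/* (Re)load a file from disk into the entry.  Returns 0 on success. */
static int load_entry(struct cache_entry *e)
{
    struct stat st;
    FILE *fp;

    if (stat(e->path, &st) != 0)
        return -1;
    fp = fopen(e->path, "rb");
    if (!fp)
        return -1;
    free(e->data);
    e->data = malloc((size_t)st.st_size);
    if (!e->data) {
        fclose(fp);
        return -1;
    }
    e->len = fread(e->data, 1, (size_t)st.st_size, fp);
    e->mtime = st.st_mtime;
    fclose(fp);
    return 0;
}

/* Per-request path: stat() the file as the message suggests, and
 * reload the in-memory copy only if the mtime changed. */
static const char *cache_fetch(struct cache_entry *e, size_t *len)
{
    struct stat st;

    if (stat(e->path, &st) != 0)
        return NULL;    /* file vanished; fall back to the normal path */
    if (st.st_mtime != e->mtime && load_entry(e) != 0)
        return NULL;
    *len = e->len;
    return e->data;
}
```

The stat() per request is the cost this design accepts in exchange for correctness: the cache can never serve a page older than the file on disk.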

The question for both is - is this really going to be faster on a machine 
with good I/O characteristics and large disk caches?  All we're really 
saving by using an in-memory cache is a read from disk - is that really a 
sizeable percentage of anyone's gprof output?  If not (and I'd be 
surprised if it were, actually) I would think our time would be better 
spent on other performance improvements, like the NCSA 1.4-style hunt 
group daemon model....

	Brian


--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
brian@hotwired.com  brian@hyperreal.com  http://www.hotwired.com/Staff/brian/

