httpd-dev mailing list archives

From Alexei Kosut <>
Subject Re: proxying
Date Sat, 20 Jan 1996 07:35:26 GMT
On Fri, 19 Jan 1996, Ben Laurie wrote:

> > Does anyone have any good references for designing a 'cache'?
> > I was thinking along the lines of using a strong 128-bit checksum of the
> > METHOD/URL as the cache key; the key is encoded as a filename, and I search
> > through all files called key.* to find a cache entry.

BTW, this is basically what my code does. I have a couple bits left to
write, but this weekend I should be done with a version y'all can see. 
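For anyone who hasn't pictured it, the scheme David describes is roughly the following. This is only an illustrative sketch: MD5 stands in for the "strong 128-bit checksum", and the cache directory layout and `.data` suffix are assumptions of mine, not what either of our modules actually writes.

```python
# Sketch of the cache-key scheme: a 128-bit checksum of METHOD/URL,
# encoded as a filename, with a lookup over all files named key.*
import hashlib
import os

def cache_key(method, url):
    """Derive a filesystem-safe key from METHOD and URL.
    MD5 here is just a stand-in for any strong 128-bit checksum."""
    digest = hashlib.md5(f"{method}:{url}".encode()).hexdigest()
    return digest  # 32 hex characters = 128 bits

def find_cache_entries(cache_dir, method, url):
    """Find every file named key.* for this request."""
    key = cache_key(method, url)
    return [f for f in os.listdir(cache_dir) if f.startswith(key + ".")]
```

The point being that the key doubles as the filename, so a lookup is just a directory scan for `key.*` — no index structure to keep consistent across processes.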

> Speaking as one who has just had to deal with inode insufficiency - don't do
> this. Much better to use a database, for starters. It is also a well known
> fact (on older filesystems, at least) that large numbers of files in a single
> directory is a Bad Thing.

Unfortunately, there isn't really any other way. In designing my proxy
module, I spent a long time thinking and talking to some friends who have
lots of experience in this sort of thing, and they all agreed that a
file-based system as David described really is the only way to do it. 
Because of the way Apache forks, there isn't any good way to manage a
database. DBM doesn't really cut it, because multiple Apache processes
need to be able to read and write to the database at the same time, and
DBM doesn't do that.
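One common reason flat files sidestep the concurrent-writer problem (a generic technique, not a claim about what Apache's proxy code does) is that each writer can build the entry under a temporary name and then rename it into place; rename within a filesystem is atomic on POSIX systems, so other processes never see a half-written entry:

```python
# Illustrative sketch: atomic publish of a cache entry via
# write-to-temp-file plus rename, so concurrent readers either see
# the old entry, no entry, or the complete new one -- never a
# partial write. The ".data" suffix is an assumption of mine.
import os
import tempfile

def store_entry(cache_dir, key, body):
    fd, tmp = tempfile.mkstemp(dir=cache_dir)
    with os.fdopen(fd, "wb") as f:
        f.write(body)
    os.rename(tmp, os.path.join(cache_dir, key + ".data"))
```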

Besides, it may not be *that* much of a problem. Certainly not as much as,
say, news is. An informal study (punching up about:cache in my copy of
Netscape 2.0b5) shows the following data, which I'll take to be somewhat
applicable, since I don't do anything extraordinary with Netscape but look
at web pages (with images):

Max size: 2 megs (this is what I have it set to in the Preferences)
Current size: 1.6 megs
Number of files: 379
Avg size of files: 4.3k

Now, obviously... 379 files isn't going to kill anyone's inode limits. 
Even a 100 meg cache (more than that, and I have to wonder what's the
point, especially given the dynamic nature of web content) will only have
about 25 thousand files. Our Usenet spool directory here uses 182 thousand
inodes (and 1.9 gigs, but that's a different story). At any rate, I vote
inode use isn't a problem unless you have a *really* large cache, or an 
old OS... but then, if you have an old OS... what are you doing running a 
new web server like Apache, eh? :)
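For what it's worth, the estimate above checks out as straight arithmetic from the Netscape numbers:

```python
# Back-of-the-envelope check: a 100 MB cache at ~4.3 KB per file.
cache_bytes = 100 * 1024 * 1024
avg_file_bytes = 4.3 * 1024
files = cache_bytes / avg_file_bytes
# comes out around 24 thousand files -- same ballpark as the ~25k above
```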

> Damn. I really should read to the end of the message. The point about inodes
> still stands. EAFS has a hard limit of 64k-some inodes. 'Nuff said.

hmm... 64k vs. 25k. I guess maybe. What is EAFS, anyhow? What comes to 
mind with that acronym is the Andrew File System, and pardon me for 
asking... but if you're going to be accessing a cache over the net, why not 
just go to the origin server in the first place?

--/ Alexei Kosut <> /--------/ Lefler on IRC
----------------------------/ <>
The viewpoints expressed above are entirely false, and in no way
represent Alexei Kosut nor any other person or entity. /--------------
