httpd-users mailing list archives

From "James Richardson" <>
Subject RE: [users@httpd] Re: serving millions of files
Date Mon, 07 Feb 2005 09:40:25 GMT

> > (By searching, I mean looking for the file to actually open
> > it). This is because directory entries are stored unordered in
> > blocks (even true for reiser4).
> >
> > There are some physical limitations to the numbers of files on disk, I
> > can't remember them offhand.
> Are you sure?  -  I have around 18 million on ONE ext3 partition
> because it is a Maildir from courier-imap.  :-)
> If reiserfs can not handle it, you need another fs like ext3.

I'm sure there are limitations; you just haven't hit them yet!

I meant that reiserfs doesn't sort directory entries, not that it can't
handle lots of files. (Although it looks like reiser4 actually can sort
directory entries.) This matters because, to find a file or directory
entry in an unsorted directory block, the filesystem has to do a linear
search, or else read the whole thing, build a structure in memory, and
then search that. Either way is definitely slower than searching a very
small unordered list.
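To make the cost concrete, here is a rough Python sketch (mine, not from the original thread) comparing a linear scan over unordered entries, like a flat directory block, with a lookup through an index built up front. The names and sizes are made up for illustration:

```python
import random
import string

def make_names(n):
    # Generate n pseudo-random "file names", standing in for the
    # entries of one large directory block.
    rng = random.Random(42)
    return ["".join(rng.choices(string.ascii_lowercase, k=12))
            for _ in range(n)]

def linear_lookup(entries, target):
    # Unordered entries: the only option is to scan every one.
    for i, name in enumerate(entries):
        if name == target:
            return i
    return -1

entries = make_names(100_000)
target = entries[-1]

# Worst case: the scan walks all 100,000 entries before finding it.
assert linear_lookup(entries, target) == len(entries) - 1

# An index (a dict here, standing in for a sorted or hashed
# on-disk directory index) reaches the same entry without scanning.
index = {name: i for i, name in enumerate(entries)}
assert index[target] == len(entries) - 1
```

The dict is only an analogy for what a sorted/hashed directory format buys you; the point is the difference between O(n) scanning and an indexed lookup.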

Also, it's not that the filesystem _cannot_ handle this stuff; on a
sufficiently fast computer with a sufficiently fast disk system, these
operations may even be quick. It's about how efficiently they happen.

In these cases the "your mileage may vary" caveat does apply, but these
were my experiences writing a ~500-user document management system on
Solaris with EMC disk arrays.

By all means throw 100,000 files into one directory; it will work. I
wouldn't do it, though, if I were serving those files on a regular basis.
If they are there just to be there, and are rarely accessed, then it
probably doesn't matter.
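The usual way around this (a standard technique, not something discussed above) is to spread the files across subdirectories keyed by a hash of the name, so no single directory ever grows large. A minimal sketch; the root path and helper name are hypothetical:

```python
import hashlib
import os

def hashed_path(root, filename, levels=2, width=2):
    # Bucket files by a prefix of a hash of the name. With two levels
    # of two hex characters each, files spread over 256 * 256 = 65,536
    # directories, so 100,000 files average fewer than 2 per directory
    # instead of 100,000 in one.
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return os.path.join(root, *parts, filename)

p = hashed_path("/var/www/files", "report-2005.pdf")
# Yields something of the form /var/www/files/<xx>/<yy>/report-2005.pdf
assert p.startswith("/var/www/files/")
assert p.endswith("report-2005.pdf")
```

Since the bucket is recomputed from the name, lookups never need to scan a big directory; you derive the path directly and open it.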

Best Regards,


The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:> for more info.
