httpd-dev mailing list archives

From Justin Erenkrantz <jerenkra...@ebuilt.com>
Subject Re: Buckets destroy not cleaning up private structures?
Date Tue, 01 May 2001 00:01:29 GMT
On Mon, Apr 30, 2001 at 07:57:52PM -0400, Cliff Woolley wrote:
> By "fairly expensive", I presume you mean this little block, which is
> linear with the number of files cached:
> 
<exactly what I had in mind>
> 
> It's certainly no worse than that.

Yes, even a linear scan scares me.  I'm not the performance expert Dean
is, but that seems "fairly" expensive to me to do on *every* request.
mod_file_cache should be as fast as we can make it.  You could still make
the case that the linear-scan tradeoff is worth it, but my gut feeling is
that a reader/writer lock implementation might scale better.  I'd like to
see the code to prove that, though...
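
Something along these lines is what I have in mind -- totally untested,
and assuming an APR-style reader/writer lock along the lines of
apr_thread_rwlock; the cache_* names here are made up for the sketch:

#include "apr_thread_rwlock.h"
#include "apr_pools.h"

static apr_thread_rwlock_t *cache_rwlock;
static apr_size_t cache_mmap_total;    /* bytes currently mmaped */

static apr_status_t cache_init(apr_pool_t *p)
{
    cache_mmap_total = 0;
    return apr_thread_rwlock_create(&cache_rwlock, p);
}

/* Request path: many concurrent readers, no contention among them. */
static apr_size_t cache_mmap_bytes(void)
{
    apr_size_t n;
    apr_thread_rwlock_rdlock(cache_rwlock);
    n = cache_mmap_total;
    apr_thread_rwlock_unlock(cache_rwlock);
    return n;
}

/* Writer side: taken only when a cached file actually gets mmaped. */
static void cache_mmap_add(apr_size_t len)
{
    apr_thread_rwlock_wrlock(cache_rwlock);
    cache_mmap_total += len;
    apr_thread_rwlock_unlock(cache_rwlock);
}

The win, if there is one, is that the per-request path only ever takes
the read lock, so requests don't serialize against each other the way
they would with a plain mutex around a linear scan.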

> You can even make it constant time by assuming that none of the files are
> mmaped to start with.  Just before you serve a request, check to see if
> the file is MMAPed.  If it's not, but it is after the request, mmaped_size
> += b->length again.  But that might require some kind of locking, which is
> (I'm guessing) what you were getting at.  Yeah, it could be a bit hairy to
> get a precise answer.  An estimate might be sufficient and easier, I don't
> know for sure.
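
For what it's worth, here's roughly how I read that constant-time
bookkeeping, sketched against the APR bucket API.  The helper names
(mmaped_before_send / account_after_send) are made up, and the increment
would still want the writer lock from the sketch above:

#include "apr_buckets.h"

static apr_size_t mmaped_size;   /* running total of mmaped bytes */

/* Call the first helper just before sending the cached bucket, and the
 * second one after the brigade has gone down the filter chain.  Reading
 * a FILE bucket can morph it into an MMAP bucket, so a "not mmaped
 * before, mmaped after" transition is the moment to charge its length
 * against the address-space budget. */
static int mmaped_before_send(apr_bucket *b)
{
    return APR_BUCKET_IS_MMAP(b);
}

static void account_after_send(apr_bucket *b, int was_mmaped)
{
    if (!was_mmaped && APR_BUCKET_IS_MMAP(b)) {
        mmaped_size += b->length;   /* needs synchronization in real code */
    }
}

That keeps the per-request cost constant, at the price of getting an
estimate rather than an exact figure, which may be all the precision we
need anyway.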

Yup.  It really depends on the scale of what you expect mod_file_cache
to handle.  I'd suppose that the more files you have, the *more* you
would want to use mod_file_cache.  I doubt a site with a hundred-odd
pages would even think about caching them; they'd see so little traffic
that almost any HTTP implementation would do fine.  High-volume sites
(cnet.com, for example) are sensitive to such "fairly expensive" things,
but could get a big performance win by leveraging mod_file_cache
(although if they use SSIs that might not matter much, since
mod_file_cache wouldn't be involved...).

Didn't Ian just submit a patch to skip evaluation of some environment
variables in mod_include that increased his numbers by 25%?  If so,
that's big...

> At any rate, even if we don't try to track how much address space we've
> used up, it would still be way, way better after fixing the leak than what
> we have now, which uses up address space something like:
> 
> sum_foreach_file_cached(sizeof(file)*num_requests_where_file_read).
> 
> <shrug>
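
(To put a rough number on that: with the leak, a 1MB cached file that
gets read a few thousand times by one child chews up gigabytes of
address space all by itself.)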

I know *anything* is an improvement over what is there right now.  I'm
just thinking of the extreme cases for the proposed solution.  This is a 
minor quibble - don't mind me.  =)  -- justin

