httpd-dev mailing list archives

From Cliff Woolley <>
Subject Re: Buckets destroy not cleaning up private structures?
Date Tue, 01 May 2001 00:54:09 GMT
On Mon, 30 Apr 2001, Justin Erenkrantz wrote:

> Yes, even a linear scan scares me.  I'm not the performance expert Dean
> is, but that seems "fairly" expensive to me to do on *every* request.

I agree.

> Yup.  It really depends on the scale of what you are expecting
> mod_file_cache to handle.  I'd suppose that the more files you have, the
> *more* you would want to use mod_file_cache.  I doubt that a site with a
> hundred-odd pages would even think about caching it.

True.  But there is also a limit to the number of file descriptors that a
process can have open at one time, though that limit can usually be
tweaked.  Regardless, whatever that limit is, it puts a cap on how many
pages can be cached by mod_file_cache and therefore a cap on the amount of
address space we might be talking about here...
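[For illustration: the per-process descriptor cap mentioned above can be queried with the standard POSIX getrlimit() call. This is only a sketch; the function name fd_soft_limit is ours, not httpd's.]

```c
/* Sketch: query the per-process open-file-descriptor limit that caps
 * how many files mod_file_cache could keep open at once.
 * Uses POSIX getrlimit(); fd_soft_limit is an illustrative name. */
#include <sys/resource.h>

long fd_soft_limit(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;                /* getrlimit failed (unlikely) */
    return (long)rl.rlim_cur;     /* soft limit; raisable up to rl.rlim_max
                                   * via setrlimit() or the shell's ulimit -n */
}
```

[The soft limit is what actually binds a running child process; an admin can raise it toward the hard limit, which is why the cap is "tweakable" but never absent.]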

> > At any rate, even if we don't try to track how much address space we've
> > used up, it would still be way, way better after fixing the leak than what
> > we have now, which uses up address space something like:
> >
> > sum_foreach_file_cached(sizeof(file)*num_requests_where_file_read).
>
> I know *anything* is an improvement over what is there right now.  I'm
> just thinking of the extreme cases for the proposed solution.  This is a
> minor quibble - don't mind me.  =)  -- justin
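[For the record, the quoted growth formula amounts to the following bookkeeping. The struct and field names are made up for illustration; they are not real httpd types.]

```c
/* Illustrative only: the address-space growth described in the quoted
 * formula, assuming each request maps the file again without unmapping.
 * cached_file is a hypothetical bookkeeping struct, not an httpd type. */
#include <stddef.h>

struct cached_file {
    size_t file_size;    /* bytes mapped each time this file is read */
    size_t times_read;   /* requests that read (and re-mapped) it */
};

/* Leaked address space = sum over cached files of size * reads. */
size_t leaked_bytes(const struct cached_file *files, size_t n)
{
    size_t total = 0;
    for (size_t i = 0; i < n; i++)
        total += files[i].file_size * files[i].times_read;
    return total;
}
```

[Fixing the leak collapses the per-file factor from times_read down to one, which is the "way, way better" case even without tracking a global total.]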

=-)  It's a useful quibble, no doubt... in the end, I'm guessing some very
conservative approximation will act as a kind of "soft" limit, as we
probably want to avoid locking and the other hoops necessary for an exact
answer.  Just what that approximation is I don't know.
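[One way such a lock-free soft limit might look, sketched with C11 atomics (which postdate this thread); the counter name and the 64 MB cap are illustrative assumptions, not anything httpd does.]

```c
/* Sketch of the "soft limit" idea: track mapped bytes with a counter
 * that is read and bumped without a mutex, so concurrent requests may
 * briefly overshoot the cap -- a conservative approximation rather
 * than an exact answer.  Names and the cap value are illustrative. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define MMAP_SOFT_LIMIT ((size_t)64 * 1024 * 1024)  /* illustrative cap */

static atomic_size_t mapped_bytes;   /* relaxed updates, no locking */

/* Decide whether to mmap a file or fall back to read().  The check
 * and the add are not one atomic step, which is exactly the slack a
 * "soft" limit tolerates. */
bool may_mmap(size_t file_size)
{
    size_t current = atomic_load_explicit(&mapped_bytes,
                                          memory_order_relaxed);
    if (current + file_size > MMAP_SOFT_LIMIT)
        return false;                /* over the cap: serve via read() */
    atomic_fetch_add_explicit(&mapped_bytes, file_size,
                              memory_order_relaxed);
    return true;
}
```

[The design choice is the one hinted at above: accept a slightly stale count to avoid the locking and hoops an exact answer would require.]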

Bill?  Thoughts on this?


   Cliff Woolley
   Charlottesville, VA
