apr-dev mailing list archives

From Cliff Woolley <cliffwool...@yahoo.com>
Subject Re: Buckets destroy not cleaning up private structures?
Date Mon, 30 Apr 2001 18:16:52 GMT
On Mon, 30 Apr 2001, Bill Stoddard wrote:

> > the next time they're read.  But the master copy in the cache will never
> > BE read!  So the copy in the cache always remains of file type, not mmap
> > type, and copies of the master made to serve future requests will start
> > out as file type as well (enabling sendfile for those future requests).
> > Even better, if one of those later requests decides it does need to do a
> > read on the file bucket and would have MMAPed it, it discovers that an
> > MMAP of the file is already available, it just changes its type and reads
> > the MMAP with virtually zero extra work incurred.
> Does apr_bucket_copy() just make a duplicate bucket in a different
> pool using the same fd?  Should be minimal overhead if true.  I don't
> see any problems with caching the whole file bucket rather than just
> the apr_file_t.  I am concerned with MMAP'ing a file and leaving that
> MMAPed file associated with the cached file bucket.  It is reasonable
> to cache literally 100's of open file descriptors (on Windows anyway)
> but I have a big concern about the hit on memory of MMAPing the
> contents of those files.  Guess it is better than the MMAP leak though
> :-)

apr_bucket_copy() calloc's a new apr_bucket struct and directs its data
pointer at the same private data entity as the original bucket (in this
case, that's an apr_bucket_file which points to an apr_file_t).  That's
better than caching just the file handle, which incurs the calloc of the
apr_bucket *and* a new apr_bucket_file to point to the cached apr_file_t.
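The sharing pattern described above can be modeled with a minimal self-contained sketch. This is not real APR code; the struct and function names here are simplified stand-ins for apr_bucket, apr_bucket_file, and apr_bucket_copy(), which carry more fields in the actual library:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for the refcounted private data (apr_bucket_file)
 * that points at the shared apr_file_t. */
typedef struct {
    int refcount;   /* how many buckets point at this private data */
    int fd;         /* stand-in for the cached apr_file_t */
} file_data;

/* Simplified stand-in for apr_bucket: just a pointer to private data. */
typedef struct {
    file_data *data;
} bucket;

/* Model of apr_bucket_copy(): calloc a new bucket struct and direct it
 * at the SAME private data entity as the original, bumping the
 * refcount.  No new file_data is allocated -- that is the saving over
 * caching only the file handle. */
static bucket *bucket_copy(bucket *b)
{
    bucket *c = calloc(1, sizeof(*c));
    c->data = b->data;
    c->data->refcount++;
    return c;
}
```

Caching just the apr_file_t would force every request to allocate both a fresh bucket and a fresh private structure; caching the whole master bucket means a copy only allocates the bucket shell.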

This is definitely better than the leak.  =-)  And nothing says that the
cache can't apr_mmap_delete() the MMAP associated with the master file
bucket as long as refcount==1 (ie, there are no requests using that
file/mmap currently in progress) if it decides it has too many MMAPs
lying around.  It's possible that the file will be re-MMAPed by a later
request, but the MMAPs could just be cycled through in an LRU fashion.
This would leak one palloc'ed apr_mmap_t in the cache's pool, however.
It's a trade-off.  Which is better: having potentially as many MMAPs open
as you have file handles cached, or growing the size of the cache pool by
sizeof(apr_mmap_t) each time you delete and recreate an MMAP for a file?
In either case, this is still much better than the current situation (the
leak), which has both problems, only worse...  ;-]
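The refcount-gated LRU eviction suggested above can be sketched as follows. This is a hypothetical illustration, not cache code from mod_file_cache: the cache_entry struct and evict_mmaps() are invented names, and the "drop" step stands in for apr_mmap_delete() on the master bucket's MMAP:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical cache entry: a master file bucket that may also have an
 * MMAP of the file's contents attached.  Names are illustrative. */
typedef struct cache_entry {
    int refcount;             /* buckets referencing this file/mmap;
                                 1 means only the cache itself */
    int has_mmap;             /* nonzero if an MMAP is attached */
    struct cache_entry *next; /* LRU list, least recently used first */
} cache_entry;

/* Walk the LRU list and drop MMAPs from idle entries (refcount == 1,
 * i.e. no request is using the file/mmap right now) until we are back
 * under the limit.  In real code the drop would be apr_mmap_delete();
 * the apr_mmap_t struct itself stays palloc'ed in the cache's pool,
 * which is the sizeof(apr_mmap_t) growth discussed above. */
static void evict_mmaps(cache_entry *lru, int max_mmaps)
{
    int live = 0;
    for (cache_entry *e = lru; e != NULL; e = e->next)
        live += e->has_mmap;

    for (cache_entry *e = lru; e != NULL && live > max_mmaps; e = e->next) {
        if (e->has_mmap && e->refcount == 1) {
            e->has_mmap = 0;   /* apr_mmap_delete() would go here */
            live--;
        }
    }
}
```

Entries with refcount > 1 are skipped even if they are least recently used, since a request is still reading that MMAP; they become eligible once their in-flight copies are destroyed.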


   Cliff Woolley
   Charlottesville, VA
