From: Ian Holsman
To: "'new-httpd@apache.org'"
Subject: RE: file/mmap buckets, subrequests, pools, 2.0.18
Date: Mon, 4 Jun 2001 10:13:30 -0700

Couldn't we have it so that the sub-handler's request pool is joined
with, or is the same as, the main request's pool (this is different
from the 'connection' pool, right?), so that sub-requests live for the
life of the request?  It looks like that is what the function
apr_pool_join does in 'debug' mode.
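
Something along these lines, wherever the sub-request pool gets
created.  This is only a sketch: make_subrequest_pool is an
illustrative name, and apr_pool_join only does anything in an
APR_POOL_DEBUG build, so it catches lifetime bugs rather than fixing
them by itself.

    #include "httpd.h"        /* request_rec */
    #include "apr_pools.h"    /* apr_pool_create, apr_pool_join */

    /* Create the sub-request's pool as a subpool of the main request's
     * pool, and tell the pool-debugging code that its contents are
     * expected to live as long as the main request does. */
    static apr_pool_t *make_subrequest_pool(request_rec *r)
    {
        apr_pool_t *sub_pool;

        apr_pool_create(&sub_pool, r->pool);  /* subpool of the request pool */
        apr_pool_join(r->pool, sub_pool);     /* no-op outside debug mode */
        return sub_pool;
    }

The join only affects the debug-mode checking, though; whether the
sub-request's data actually survives still depends on when the
sub-request pool really gets destroyed.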
> -----Original Message-----
> From: rbb@covalent.net [mailto:rbb@covalent.net]
> Sent: Friday, June 01, 2001 1:49 PM
> To: new-httpd@apache.org
> Subject: Re: file/mmap buckets, subrequests, pools, 2.0.18
>
> On Fri, 1 Jun 2001, Greg Stein wrote:
>
> > On Fri, Jun 01, 2001 at 11:00:08AM -0700, rbb@covalent.net wrote:
> > >...
> > > This is relatively simple.  A while ago, I changed the default
> > > handler to use the connection pool to fix this problem.  A couple
> > > of months ago, Dean pointed out that this was a major resource
> > > leak.  After 2.0.16, somebody (Roy?) pointed out that this was a
> > > pretty big problem when serving a lot of very large files on the
> > > same connection.
> > >
> > > The solution was a simple loop at the end of the
> > > core_output_filter that reads the data from the file into memory.
> > > This is okay to do, because we are guaranteed to have less than
> > > 9k of data.  It sounds like the problem is that we don't read in
> > > the data if we have an MMAP, or we may not be getting into the
> > > loop on sub-requests.
> >
> > What about the idea to have setaside() take a pool parameter?  The
> > bucket should ensure that its contents live at least as long as the
> > pool.
> >
> > For an MMAP bucket, if the given pool is the same or a subpool of
> > the mmap's pool, then nothing needs to happen.  If the pool is a
> > parent of the mmap's pool, then the bucket needs to read its
> > contents into a new POOL bucket attached to the passed-in pool.
> >
> > Other buckets operate similarly.  This would ensure that we can
> > safely set aside any type of bucket, for any particular lifetime
> > (whether that is for a connection or a request or whatever).
>
> Yes, that would work as well.  I am beginning to think that this is
> overkill for our use cases, and it wouldn't really solve this
> problem, since the sub_request_output_filter still wouldn't be
> calling setaside.  Also, when a regular filter calls setaside, which
> pool does it use?  I would guess c->pool, but that could get
> confusing.
>
> My only other concern is actually walking all the way back up to
> ensure that the current pool is a descendant of the pool passed to
> setaside.  Those tests should be quick, but we will be calling
> setaside a lot through the course of some requests.  I am positive
> that we only want to do this "copy anything under 9k to a
> non-volatile location" in two places, whereas setaside is
> potentially called from every filter.  If the setaside function is
> ever called incorrectly, we will end up doing the copies far more
> often than we need/want to.
>
> Those are just my concerns though, not a reason not to do the work.
> I just figure that by getting this stuff out in the open early, we
> can avoid some annoying headaches.
>
> Ryan
>
> _______________________________________________________________________________
> Ryan Bloom                                              rbb@apache.org
> 406 29th St.
> San Francisco, CA 94131
> -------------------------------------------------------------------------------
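
P.S.  Just so we're all picturing the same thing, here is a rough
sketch of Greg's setaside-with-a-pool idea for the MMAP case.  The
helper name, the extra bucket_pool argument, and the use of
apr_pool_is_ancestor() are assumptions of mine rather than existing
API, and the exact bucket calls may not match what is in the 2.0.18
tree:

    #include "apr_buckets.h"   /* apr_bucket, apr_bucket_read, apr_bucket_pool_make */
    #include "apr_pools.h"     /* apr_pool_is_ancestor */
    #include "apr_strings.h"   /* apr_pmemdup */

    /* p           - the pool the caller needs the data to outlive
     *               (Greg's proposed new argument to setaside)
     * bucket_pool - the pool the mmap'd region currently belongs to
     *               (passed in here to avoid guessing at bucket
     *               internals) */
    static apr_status_t mmap_setaside_sketch(apr_bucket *b, apr_pool_t *p,
                                             apr_pool_t *bucket_pool)
    {
        const char *buf;
        apr_size_t len;
        apr_status_t rv;

        /* p is the same pool as, or a subpool of, the mmap's pool, so
         * the mapping already lives at least as long as p: nothing to
         * do. */
        if (p == bucket_pool || apr_pool_is_ancestor(bucket_pool, p)) {
            return APR_SUCCESS;
        }

        /* p outlives the mmap's pool: copy the data into memory owned
         * by p and morph this bucket into a POOL bucket. */
        rv = apr_bucket_read(b, &buf, &len, APR_BLOCK_READ);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        apr_bucket_pool_make(b, apr_pmemdup(p, buf, len), len, p);
        return APR_SUCCESS;
    }

Presumably a FILE bucket would look much the same, with the read
pulling the data off disk instead of out of the mapping, which is what
I take "other buckets operate similarly" to mean.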