From: "David Reid"
To: "Cliff Woolley", "APR Development List"
Subject: Re: cvs commit: apr/include apr_sms_blocks.h
Date: Thu, 14 Jun 2001 08:56:25 +0100

OK. I'll look at adding the capability in some sane manner :)  (A rough
sketch of the reset handling is below the quoted message.)

david

----- Original Message -----
From: "Cliff Woolley"
To: "David Reid"
Cc: "APR Development List"
Sent: Thursday, June 14, 2001 5:26 AM
Subject: Re: cvs commit: apr/include apr_sms_blocks.h

> On Thu, 14 Jun 2001, David Reid wrote:
>
> > The bucket structures are allocated from the bms; allocations for data
> > can come from either the ams (some other type of sms yet to be added)
> > or from the pms using plain old malloc. 8192 bytes gives us space for
> > 127 bucket structures per thread. If we need more we can always add a
> > method to get more, but given that it'll only be used by a single
> > thread and be reset between connections (if I've got that right), that
> > should be enough, shouldn't it?
>
> One would think so, but no, not necessarily. It's possible for a filter
> to split a brigade into a million little pieces, possibly even one byte
> per bucket (granted, that would be a pretty stupid filter).
>
> [Side note: I just saw Ian's post, and he has some good examples of
> cases where you could end up with lots of buckets at a time. I'm not
> sure about the subrequests example, because each subrequest would
> generally be handled one at a time, with its buckets dumped out to the
> network and then freed. But it could happen, I guess.]
>
> It occurred to me that all you have to do to handle resets is keep a
> list of the "extra" blocks you allocate and then loop through it,
> freeing each one, when the reset happens. You end up with just the one
> pre-allocated block you started with.
>
> > Anyway, grist for the mill to discuss on Friday :)
>
> Definitely. =-)
>
> --Cliff
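
P.S. Here's roughly what I have in mind. The names below are hypothetical
(this isn't the actual apr_sms_blocks API), and the real code would get
extra blocks from the parent sms rather than plain malloc, but the
reset-to-one-block idea is the same:

    #include <stdlib.h>

    /* Hypothetical block layout, illustrative only. */
    typedef struct block_t {
        struct block_t *next;      /* chain of "extra" blocks */
        char data[8192];           /* carved up into bucket structures */
    } block_t;

    typedef struct blocks_sms_t {
        block_t  first;            /* the one pre-allocated block */
        block_t *extra;            /* head of the list of extra blocks */
        /* ... free-list bookkeeping elided ... */
    } blocks_sms_t;

    /* Grow on demand: allocate another block and remember it on the
     * "extra" list so the reset can find it later. */
    static block_t *blocks_grow(blocks_sms_t *sms)
    {
        block_t *b = malloc(sizeof(*b));  /* real code: parent sms alloc */
        if (!b)
            return NULL;
        b->next = sms->extra;
        sms->extra = b;
        return b;
    }

    /* Reset between connections: free every extra block, keeping only
     * the pre-allocated one, as suggested above. */
    static void blocks_reset(blocks_sms_t *sms)
    {
        block_t *b = sms->extra;
        while (b) {
            block_t *next = b->next;
            free(b);                      /* real code: parent sms free */
            b = next;
        }
        sms->extra = NULL;
        /* ... and point the free list back into sms->first ... */
    }

Since it's per-thread we shouldn't need any locking around the list.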