Return-Path: Delivered-To: apmail-apr-dev-archive@apr.apache.org Received: (qmail 76838 invoked by uid 500); 8 Jul 2001 17:32:14 -0000 Mailing-List: contact dev-help@apr.apache.org; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: Delivered-To: mailing list dev@apr.apache.org Received: (qmail 76827 invoked from network); 8 Jul 2001 17:32:13 -0000 Date: Sun, 8 Jul 2001 10:34:22 -0700 From: Jon Travis To: dean gaudet Cc: apr-dev@apache.org Subject: Re: Observations on fragmentation in SMS pools Message-ID: <20010708103422.A15223@covalent.net> References: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.2.5i In-Reply-To: ; from dean@arctic.org on Sun, Jul 08, 2001 at 10:25:17AM -0700 X-Spam-Rating: h31.sny.collab.net 1.6.2 0/1000/N On Sun, Jul 08, 2001 at 10:25:17AM -0700, dean gaudet wrote: > On Sun, 8 Jul 2001, dean gaudet wrote: > > > > On Sun, 8 Jul 2001, Justin Erenkrantz wrote: > > > > > Yup. I've brought this up to Sander and David before, but this is how > > > pools > > > > woah. no way really? > > > > that's not at all how it was in 1.3 or in early 2.0 ... > > > > in 2.0 as of uh a year ago say, there was one free list per process, > > and locks were used to access it. > > i checked -- top of tree pools still behaves almost like 1.3. so i'm not > sure why you're claiming the pools would go up through the ancestors for > an allocation. > > apr_palloc first tries the simple pointer arithmetic fast path, and if > that fails it calls new_block() which accesses the process-global > block_freelist (from inside an alloc_mutex critical section). There is still all this tomfoolery with locking, though, which I think would be nice to fix with different sized buckets in the freelist. Stuff that the malloc in glibc does. I cringe at the thought of how much overhead due to abstraction this whole project is causing. -- Jon