Date: Sun, 8 Jul 2001 10:19:33 -0700
From: Justin Erenkrantz
To: dean gaudet
Cc: Brian Pane, apr-dev@apache.org
Subject: Re: Observations on fragmentation in SMS pools

On Sun, Jul 08, 2001 at 10:14:16AM -0700, dean gaudet wrote:
> an ideal situation for free-lists (blocks of freed, but not free()d
> memory) is one per cpu.
>
> a less ideal situation is one per thread.
>
> an even less ideal situation is one per process (which requires locking).
>
> it's insane to have one per pool :)

I think we're shooting for #2 (less ideal), unless someone can come up
with a way to dynamically tell how many CPUs we're running on and bind
a free-list to a specific CPU.

We're currently doing #3 (even less ideal) in apr_pools.c. And, yeah,
the current trivial SMS is doing #4. =)  But don't expect it to stay
like that. And if we implement apr_sms_child_malloc, it gets us to
somewhere between #2 and #3.
-- 
justin
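
P.S. For anyone following along, here's a rough sketch of what a
per-thread free-list (option #2 above) might look like. This is only an
illustration of the idea, not code from apr_pools.c or the SMS work, and
none of the names below (block_alloc, block_free, BLOCK_SIZE) are real
APR symbols:

    /* Sketch: one free-list per thread, kept in thread-local storage.
     * Blocks are freed onto the calling thread's list instead of being
     * free()d, so the fast path never takes a lock. */
    #include <stdlib.h>
    #include <pthread.h>

    #define BLOCK_SIZE 8192

    struct free_block {
        struct free_block *next;
    };

    static pthread_key_t free_list_key;
    static pthread_once_t key_once = PTHREAD_ONCE_INIT;

    static void make_key(void)
    {
        /* No destructor for brevity; a real version would walk the
         * list and free() the blocks when the thread exits. */
        pthread_key_create(&free_list_key, NULL);
    }

    static void *block_alloc(void)
    {
        struct free_block *head;

        pthread_once(&key_once, make_key);
        head = pthread_getspecific(free_list_key);
        if (head) {
            /* Reuse a block from this thread's list: no locking. */
            pthread_setspecific(free_list_key, head->next);
            return head;
        }
        /* List empty: fall back to the system allocator. */
        return malloc(BLOCK_SIZE);
    }

    static void block_free(void *mem)
    {
        /* Push the block onto this thread's list instead of free()ing. */
        struct free_block *blk = mem;

        pthread_once(&key_once, make_key);
        blk->next = pthread_getspecific(free_list_key);
        pthread_setspecific(free_list_key, blk);
    }

The point being: since each thread only ever touches its own list, the
alloc/free fast path needs no lock, which is exactly what we lose with
the per-process list (#3).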