Mailing-List: dev@apr.apache.org (contact dev-help@apr.apache.org; run by ezmlm)
Message-ID: <020c01c0a285$4d1e37f0$96c0b0d0@roweclan.net>
References: <3A9E8DBE.9020302@holsman.net> <014301c0a280$24eb7c20$e4421b09@raleigh.ibm.com>
From: "William A. Rowe, Jr."
Subject: Re: some reasons why Apache 2.0 threaded is slower than prefork
Date: Thu, 1 Mar 2001 13:24:56 -0600

From: "Bill Stoddard"
Sent: Thursday, March 01, 2001 12:47 PM

> FWIW, last week I wrote a very simple memory allocator (apr_malloc(),
> apr_calloc(), apr_free()) and replaced all the malloc/calloc/free calls
> in the apr-util/buckets with the apr_* calls. It was good for a 10%
> performance boost serving static pages on Windows. My allocator used
> intra-process apr mutexes, which are implemented as Win32
> CriticalSections. There are probably better sync objects available
> (compare-and-swap) which would be good for a few more %.
>
> If anyone is interested, I'd be happy to post it to the list in all its
> unfinished/unrefined glory.

+1! This offers us the opportunity to add all sorts of validation to the
server.

However... we once discussed the possibility of extending the 'pool'
concept to wrap any sort of memory, in a manner that cleanups could always
be introduced.
Creating a context (I think this is why we renamed to _ctx_ for a while)
would provide a scope: apr_free on a pool allocation would be a no-op,
while on a malloc allocation it would actually free. Can we adopt the
following?

    apr_mem_alloc
    apr_mem_alloc_clear
    apr_mem_free

(mem_ could be m_, or simply m with no trailing underscore; your choice,
Bill.)

The upshot? apr_ctx_alloc, apr_ctx_free, etc. (probably in apr-util) could
delegate to apr_pool, apr_mem, or whichever allocation schema is
appropriate. But FirstBill, your functions appear very platform-specific,
so they belong in apr.