Date: Thu, 1 Mar 2001 14:23:29 -0800
From: Greg Stein <gstein@lyra.org>
To: new-httpd@apache.org
Subject: Re: some reasons why Apache 2.0 threaded is slower than prefork
Message-ID: <20010301142329.D2297@lyra.org>
References: <3A9EB44E.E86E8A39@Golux.Com>
In-Reply-To: <3A9EB44E.E86E8A39@Golux.Com>; from Ken.Coar@Golux.Com on Thu, Mar 01, 2001 at 03:42:54PM -0500
Reply-To: new-httpd@apache.org
User-Agent: Mutt/1.2i

On Thu, Mar 01, 2001 at 03:42:54PM -0500, Rodent of Unusual Size wrote:
> Cliff Woolley wrote:
> > It's the "lock-free operations" part that I've been stumbling over
> > so far.
>
> OpenVMS on the VAX dealt with this by using the low bit of the
> pointer as a lock. Blocks were always 64-bit aligned, so it
> was free. They used interlocked instructions that were guaranteed
> atomic: BBSSI (branch on bit set and set, interlocked) and BBCCI.
> I do not know how we could use that here, but it was simple and
> elegant.. thread, process, and SMP safe.

I once tried to create a lock-less linked list for managing some thread
state stuff in Python. I believe it finally came to the point where I had
a simple proof that you just can't implement a read/write linked list in a
multi-threaded environment without a lock. Beats me if I can remember the
key point, though.

If you can quickly recover a per-thread linked list, then you won't need
any locks. We have per-thread data in APR, so APRUTIL can easily implement
a system based either on intraprocess locks or on a per-thread basis.

Conceivably, you could also implement the lists on a per-pool rather than
per-thread basis. That would allow you to hook your list to the connection
pool (though it would still cause some ramp-up mallocs on the first request
of each connection).

Cheers,
-g

--
Greg Stein, http://www.lyra.org/
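
For illustration, a minimal sketch of the pointer-low-bit trick quoted
above, using C11 atomics where the VAX used the interlocked BBSSI/BBCCI
instructions. None of this is httpd or APR code; the names are made up,
and it only assumes that nodes are aligned so bit 0 of the head pointer
is always free to act as the lock bit.

    /* Sketch only: the low bit of the list head doubles as a spinlock. */
    #include <stdatomic.h>
    #include <stdint.h>

    #define LOCK_BIT ((uintptr_t)1)

    typedef struct node {
        struct node *next;
        void        *data;
    } node_t;

    /* Head stored as an integer so its low bit can be flipped atomically. */
    static _Atomic uintptr_t list_head;

    static void list_lock(void)
    {
        /* Spin until we are the thread that flipped bit 0 from 0 to 1
           (the moral equivalent of BBSSI). */
        while (atomic_fetch_or(&list_head, LOCK_BIT) & LOCK_BIT)
            ;  /* bit already set: someone else holds it, retry */
    }

    static void list_unlock(void)
    {
        /* Clear bit 0 again (the BBCCI side). */
        atomic_fetch_and(&list_head, ~LOCK_BIT);
    }

    static void list_push(node_t *n)
    {
        list_lock();
        n->next = (node_t *)(atomic_load(&list_head) & ~LOCK_BIT);
        /* Keep the bit set while swapping in the new head. */
        atomic_store(&list_head, (uintptr_t)n | LOCK_BIT);
        list_unlock();
    }

Readers and poppers would have to take the same bit (or at least mask it
off) before following the pointer, so this is still a lock, just one that
costs no extra storage.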
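
And a sketch of the per-thread idea from the body of the mail: if each
thread keeps its own free list, pushes and pops never race and need no
lock at all. The _Thread_local variable below stands in for APR's
per-thread data (or a list hung off the connection pool); item_get and
item_put are hypothetical names, not an existing API.

    /* Sketch only: a per-thread free list needs no locking. */
    #include <stdlib.h>

    typedef struct item {
        struct item *next;
    } item_t;

    /* One list head per thread: no other thread ever touches it. */
    static _Thread_local item_t *free_list;

    /* Caller must pass size >= sizeof(item_t). */
    static item_t *item_get(size_t size)
    {
        item_t *it = free_list;
        if (it != NULL) {
            free_list = it->next;   /* reuse with no lock */
            return it;
        }
        return malloc(size);        /* the ramp-up cost on first use */
    }

    static void item_put(item_t *it)
    {
        it->next = free_list;       /* returned to this thread's list only */
        free_list = it;
    }

The trade-off is that memory freed by one thread is only ever reused by
that same thread, which is exactly the sort of thing a per-pool variant
would change.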