Message-ID: <3B6AA085.2ED9B0CF@raleigh.ibm.com>
Date: Fri, 03 Aug 2001 09:00:53 -0400
From: "Paul J. Reder"
Reply-To: new-httpd@apache.org
To: new-httpd@apache.org
Subject: Re: [Patch]: Scoreboard as linked list.
References: <3B6A0BD4.E54E56BF@raleigh.ibm.com> <01080220062901.02210@koj.rkbloom.net> <3B6A1F5F.7060703@pacbell.net>

Brian Pane wrote:
>
> Ryan Bloom wrote:
>
> >-1. As I have stated multiple times, if this uses a mutex to lock the
> >list whenever something walks the scoreboard, I can't accept it. It
> >will kill the performance for modules that I have.
>
> I'm not convinced that you actually have to lock the whole list
> during a scoreboard traversal.

True. Finer-granularity locking could be used *if* needed.

> In fact, if a node's contents
> are left intact when it's 'deleted' and put back on the free
> list, it may even be possible to add/remove nodes without using
> locks (assuming that only one thread can add/remove nodes at a
> time

This is probably a bad assumption, since each process and worker returns
itself to the free list as it exits; multiple additions can be happening
at the same time...

> and the amount of time that a deleted node spends on the
> free list is long enough for a scoreboard-walking reader that
> happens to have a pointer to that node to finish reading from
> that node before the node is reallocated).

Since the goal, under heavy load, is to make the best possible use of
workers, we want to minimize the amount of time workers spend on the
free list. We can't assume that a worker spends much time on the free
list, and we certainly don't want to extend that time. I could, however,
alter the routines to put returned nodes at the tail of the free list
and take new ones off the head. That would give each node the longest
possible time on the free list without artificially adding delay. (A
rough sketch of what I mean is at the end of this message.)

> Also, the documentation
> that Paul posted mentions the option of using per-process or
> per-worker locking; that might offer sufficiently small granularity,
> depending on what specifically your modules are doing with the
> scoreboard.
> --Brian

Again, this is a possibility, *if* performance requires it. Finer-granularity
locking adds complexity to the code, so I would discourage moving to it
unless the current scheme proves to be a problem. According to my testing,
it isn't currently a problem. Please prove me wrong and we can change it.

-- 
Paul J. Reder
-----------------------------------------------------------
"The strength of the Constitution lies entirely in the determination of
each citizen to defend it. Only if every single citizen feels duty bound
to do his share in this defense are the constitutional rights secure."
-- Albert Einstein
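
P.S. Here is a rough sketch (in C) of the FIFO free-list handling I
described above. The struct and function names are made up for
illustration and are not the actual scoreboard types from the patch;
whatever locking we end up with would still be held around these calls.

#include <stddef.h>

/* Illustrative only: a singly linked FIFO free list.  Returned nodes
 * go on the tail and new workers come off the head, so each node sits
 * on the free list as long as possible before being reused. */

typedef struct sb_node {
    struct sb_node *next;
    /* ... worker status fields ... */
} sb_node;

typedef struct {
    sb_node *head;   /* oldest free node; handed out first */
    sb_node *tail;   /* most recently returned node */
} sb_free_list;

/* Called when a worker exits: append its node to the tail. */
static void sb_free_node(sb_free_list *fl, sb_node *n)
{
    n->next = NULL;
    if (fl->tail) {
        fl->tail->next = n;
    }
    else {
        fl->head = n;
    }
    fl->tail = n;
}

/* Called when a worker starts: reuse the node that has been free longest. */
static sb_node *sb_alloc_node(sb_free_list *fl)
{
    sb_node *n = fl->head;

    if (n) {
        fl->head = n->next;
        if (fl->head == NULL) {
            fl->tail = NULL;
        }
        n->next = NULL;
    }
    return n;
}

Since multiple processes and workers can return nodes at the same time
(see above), both calls would be made with the scoreboard mutex, or a
finer-grained lock, held.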