From: Greg Ames
Date: Fri, 27 Jul 2001 20:52:39 -0400
To: new-httpd@apache.org
Subject: Re: [PATCH] threaded MPM: limit re-use of scoreboard

Greg Ames wrote:
> 
> This patch prevents multiple processes that are starting up from
> grabbing the same process slot in the scoreboard.  Other processes may
> still share that process slot while they are quiescing (due to
> MaxRequestsPerChild, perform_idle_server_maintenance, or restarts).
> 
> Since only one process per slot will be starting new threads at any
> given time, this eliminates the race condition where multiple processes
> see what looks like a vacant worker slot at the same time.
> 
> This works for me.  I would appreciate comments, and would love to see
> other folks bang on it.

Whoops... there's a bug in the patch I posted earlier.  If multiple
threads from a dying process are allowed to set "quiescing", then under
worst-case conditions the same number of new processes can end up
sharing the same process slot in the scoreboard.  The fix is to let
only the first thread that exits set "quiescing".

Please use this patch instead.
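In case it helps to see the idea without the scoreboard plumbing, here is
a rough standalone sketch of the mechanism, using plain pthreads and a
made-up fake_process_score struct instead of APR locks and the real
process_score -- illustration only, not httpd code:

/*
 * Sketch: the first worker thread to exit marks the process slot
 * "quiescing"; the parent treats a slot as reusable only if it is
 * empty or quiescing.  Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

struct fake_process_score {      /* stands in for process_score */
    int pid;
    int quiescing;
};

static struct fake_process_score slot = { 1234, 0 };

static pthread_mutex_t count_mutex = PTHREAD_MUTEX_INITIALIZER;
static int worker_thread_count = NUM_THREADS;
static int dying = 0;            /* process-local: has a thread exited yet? */

/* parent side: may this scoreboard slot be handed to a new child? */
static int slot_is_reusable(const struct fake_process_score *ps)
{
    return !ps->pid || ps->quiescing;
}

/* child side: what each worker thread does on its way out */
static void *worker_exit(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&count_mutex);
    if (!dying) {
        /* first thread out of this process: flag the slot for reuse */
        slot.quiescing = 1;
        dying = 1;
    }
    worker_thread_count--;
    pthread_mutex_unlock(&count_mutex);
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_THREADS];
    int i;

    printf("reusable before exit? %d\n", slot_is_reusable(&slot)); /* 0 */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, worker_exit, NULL);
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);
    printf("reusable after exit?  %d\n", slot_is_reusable(&slot)); /* 1 */
    return 0;
}

The real patch does the same thing under worker_thread_count_mutex, and
also checks ap_my_pid against the pid stored in the slot so that a process
which no longer owns the slot can't flip the flag.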
Greg

Index: include/scoreboard.h
===================================================================
RCS file: /cvs/apache/httpd-2.0/include/scoreboard.h,v
retrieving revision 1.28
diff -u -d -b -r1.28 scoreboard.h
--- scoreboard.h  2001/07/26 15:53:15  1.28
+++ scoreboard.h  2001/07/28 00:40:08
@@ -178,6 +178,9 @@
     pid_t pid;
     ap_generation_t generation;    /* generation of this child */
     ap_scoreboard_e sb_type;
+    int quiescing;          /* the process whose pid is stored above is
+                             * going down gracefully
+                             */
 };
 
 typedef struct {
Index: server/mpm/threaded/threaded.c
===================================================================
RCS file: /cvs/apache/httpd-2.0/server/mpm/threaded/threaded.c,v
retrieving revision 1.50
diff -u -d -b -r1.50 threaded.c
--- threaded.c  2001/07/26 18:11:53  1.50
+++ threaded.c  2001/07/28 00:40:09
@@ -648,8 +648,15 @@
     apr_pool_destroy(tpool);
     ap_update_child_status(process_slot, thread_slot, (dying) ?
                            SERVER_DEAD : SERVER_GRACEFUL, (request_rec *) NULL);
-    dying = 1;
     apr_lock_acquire(worker_thread_count_mutex);
+    if (!dying) {
+        /* this is the first thread to exit */
+        if (ap_my_pid == ap_scoreboard_image->parent[process_slot].pid) {
+            /* tell the parent that it may use this scoreboard slot */
+            ap_scoreboard_image->parent[process_slot].quiescing = 1;
+        }
+        dying = 1;
+    }
     worker_thread_count--;
     if (worker_thread_count == 0) {
         /* All the threads have exited, now finish the shutdown process
@@ -900,6 +907,7 @@
         clean_child_exit(0);
     }
     /* else */
+    ap_scoreboard_image->parent[slot].quiescing = 0;
     ap_scoreboard_image->parent[slot].pid = pid;
     return 0;
 }
@@ -961,6 +969,7 @@
     int i, j;
     int idle_thread_count;
     worker_score *ws;
+    process_score *ps;
     int free_length;
     int free_slots[MAX_SPAWN_RATE];
     int last_non_dead;
@@ -1002,7 +1011,10 @@
                 ++idle_thread_count;
             }
         }
-        if (any_dead_threads && free_length < idle_spawn_rate) {
+        ps = &ap_scoreboard_image->parent[i];
+        if (any_dead_threads && free_length < idle_spawn_rate
+            && (!ps->pid               /* no process in the slot */
+                || ps->quiescing)) {   /* or at least one is going away */
             free_slots[free_length] = i;
             ++free_length;
         }
@@ -1083,6 +1095,8 @@
                 for (i = 0; i < ap_threads_per_child; i++)
                     ap_update_child_status(child_slot, i, SERVER_DEAD,
                                            (request_rec *) NULL);
+                ap_scoreboard_image->parent[child_slot].pid = 0;
+                ap_scoreboard_image->parent[child_slot].quiescing = 0;
                 if (remaining_children_to_start
                     && child_slot < ap_daemons_limit) {
                     /* we're still doing a 1-for-1 replacement of dead