From: Paul Querna <chip@force-elite.com>
To: dev@httpd.apache.org
Subject: Event MPM: Spinning on cleanups?
Date: Sun, 04 Dec 2005 23:14:17 -0800
Message-ID: <4393E8C9.6060804@force-elite.com>

I finally got around to upgrading to trunk w/ the Event MPM on one of my machines.
Within a couple hours of starting it, I had a process spinning and consuming 100% CPU. Backtrace from the spinning thread:

(gdb) thread 6
[Switching to thread 6 (Thread 0x20450000 (LWP 100189))]
#0  apr_allocator_free (allocator=0x2054ab80, node=0x205a2000)
    at memory/unix/apr_pools.c:334
334         if (max_free_index != APR_ALLOCATOR_MAX_FREE_UNLIMITED
(gdb) where
#0  apr_allocator_free (allocator=0x2054ab80, node=0x205a2000) at memory/unix/apr_pools.c:334
#1  0x280eb952 in apr_bucket_free (mem=0x0) at buckets/apr_buckets_alloc.c:182
#2  0x280e9915 in heap_bucket_destroy (data=0x205a4090) at buckets/apr_buckets_heap.c:36
#3  0x280e9a54 in apr_brigade_cleanup (data=0x2059e6b0) at buckets/apr_brigade.c:44
#4  0x280e9a8b in brigade_cleanup (data=0x2059e6b0) at buckets/apr_brigade.c:34
#5  0x282021bd in run_cleanups (cref=0x2059e028) at memory/unix/apr_pools.c:2044
#6  0x28202f39 in apr_pool_clear (pool=0x2059e018) at memory/unix/apr_pools.c:689
#7  0x08081063 in worker_thread (thd=0x81d1660, dummy=0x0) at event.c:682
#8  0x2820b3e4 in dummy_worker (opaque=0x0) at threadproc/unix/thread.c:138
#9  0x2823720b in pthread_create () from /usr/lib/libpthread.so.2
#10 0x282fa1ef in _ctx_start () from /lib/libc.so.6

OS: FreeBSD 6.0-STABLE
APR: Trunk
APR-Util: Trunk
HTTPD: Trunk

Best guess is that we corrupted a bucket brigade by double-freeing it, or something of that kind. This is definitely new behavior since the async-write code was merged into trunk. It is odd that we could have double-freed something on the connection pool. Maybe it isn't a double-free issue at all... I'm too tired to debug much of it tonight. Maybe later this week I will dig deeper.

-Paul