Message-ID: <3CDBDE22.7020008@tanukisoftware.com>
Date: Fri, 10 May 2002 23:50:10 +0900
From: Leif Mortenson
To: Avalon Developers List <avalon-dev@jakarta.apache.org>
Subject: Re: DO NOT REPLY [Bug 8970] New: - ResourceLimitingPool doesn't respect maximum pool size

I just added a test for this and it seems to be working correctly. Is
there any possibility that the threads are coming from somewhere else in
your system?

Try pressing CTRL-\ to get the JVM to produce a thread dump. Each of the
worker threads is named as such, so they should be easy to pick out.

The thread pool logs fairly detailed debug information whenever poolables
are created, destroyed, or reused. Try turning on debug output and see if
that helps.
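If you want to count the workers programmatically instead of eyeballing
dumps, a plain-JDK sketch along these lines should work even on a 1.2 VM
(the "Worker" name fragment is an assumption -- match it against whatever
names actually show up in your dump):

    public class WorkerThreadCount
    {
        /** Counts live threads whose names contain nameFragment. */
        public static int count( String nameFragment )
        {
            // Walk up to the root thread group so every thread is seen.
            ThreadGroup root = Thread.currentThread().getThreadGroup();
            while( root.getParent() != null )
            {
                root = root.getParent();
            }

            // enumerate() races with thread creation, so oversize the array.
            Thread[] threads = new Thread[ root.activeCount() * 2 + 10 ];
            int n = root.enumerate( threads, true );

            int matches = 0;
            for( int i = 0; i < n; i++ )
            {
                if( ( threads[ i ] != null )
                    && ( threads[ i ].getName().indexOf( nameFragment ) != -1 ) )
                {
                    matches++;
                }
            }
            return matches;
        }
    }

Logging count( "Worker" ) from your monitor loop every few seconds would
show whether the growth really tracks this pool or some other source of
threads.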
That said, I could have missed something. If so, the debug output would
be quite helpful.

Cheers,
Leif

bugzilla@apache.org wrote:

>DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG
>RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
>.
>ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND
>INSERTED IN THE BUG DATABASE.
>
>http://nagoya.apache.org/bugzilla/show_bug.cgi?id=8970
>
>ResourceLimitingPool doesn't respect maximum pool size
>
>           Summary: ResourceLimitingPool doesn't respect maximum pool size
>           Product: Avalon
>           Version: unspecified
>          Platform: Sun
>        OS/Version: Solaris
>            Status: NEW
>          Severity: Major
>          Priority: Other
>         Component: Excalibur
>        AssignedTo: avalon-dev@jakarta.apache.org
>        ReportedBy: John.Webber@jentro.com
>
>
>Our application uses an
>org.apache.avalon.excalibur.thread.impl.ResourceLimitingThreadPool; we
>use the single-arg constructor (max pool size). There is only one
>instance of the RLTP in our application.
>
>After running for a while (a network management application that opens a
>large number of TCP/UDP sockets), we find that the number of
>WorkerThreads in the thread dump exceeds the configured maximum; the
>number seems to increase slowly over time. We tried with a maximum of 25
>and with a maximum of 5 and observed the same behavior.
>
>Here is a code snippet showing our use of the thread pool. MonitorWorker
>is a Runnable. The threadPool variable is a ResourceLimitingThreadPool.
>The workerPool is a simple pool of MonitorWorker objects (based on ideas
>from Doug Lea's Concurrent Programming in Java, 2nd Ed.)
>
>for( int i = 0; i < monitors.length; i++ )
>{
>    MonitorWorker nextWorker = null;
>    try
>    {
>        Thread.sleep( sleepTime );
>        synchronized( this.monitors )
>        {
>            if( i < this.monitors.length )
>            {
>                nextWorker = this.workerPool.getWorker();
>                nextWorker.resetMonitor( this.monitors[i] );
>            }
>            else
>            {
>                break;
>            }
>        }
>        this.threadPool.execute( nextWorker );
>    }
>    catch( InterruptedException ie )
>    {
>        this.logger.debug( "Execution interrupted" );
>    }
>}
>
>The tested system is a Sun SPARC Ultra 60 running Solaris 5.8. The Java
>VM is Java 1.2.2 46.0 (Sun Microsystems Inc.). We're using a minimum
>heap size of 32M and a maximum of 132M.
>
>If you need any more information I'll be happy to help!
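PS - For reference, the test I mentioned boils down to something like the
sketch below. This is from memory rather than the committed test; the
single-int-argument (max pool size) constructor and execute( Runnable )
usage are taken straight from your report. The pool simply gets far more
short-lived jobs than its limit, after which a CTRL-\ dump should never
show more than five workers.

    import org.apache.avalon.excalibur.thread.impl.ResourceLimitingThreadPool;

    public class PoolLimitCheck
    {
        public static void main( String[] args ) throws Exception
        {
            // Single-arg constructor: maximum pool size, as in the report.
            ResourceLimitingThreadPool pool = new ResourceLimitingThreadPool( 5 );

            // Feed the pool far more short-lived jobs than its maximum.
            for( int i = 0; i < 200; i++ )
            {
                pool.execute( new Runnable()
                {
                    public void run()
                    {
                        try
                        {
                            Thread.sleep( 20 );
                        }
                        catch( InterruptedException ie )
                        {
                            // Ignore; the job is about to exit anyway.
                        }
                    }
                } );
            }

            // Keep the VM alive long enough to take a dump (CTRL-\ on
            // Solaris) and count the named worker threads by hand.
            Thread.sleep( 5000 );
        }
    }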