From: Julian Löffelhardt
To: Tomcat Users List <tomcat-user@jakarta.apache.org>
Subject: tomcat,mod_jk & loadbalancing
Date: Mon, 9 Dec 2002 17:17:45 +0100

Hi,

I'm using an Apache <---> 3 Tomcats load-balancing setup with:

  Apache 1.3.26
  mod_jk 1.2.0
  3 x Tomcat 4.0.4

The Tomcats are configured with:

  Xmx: 512m
  AJP13 connector allowing up to 800 connections

Each worker is set up like this and added to a load-balanced worker:

  worker.host1.port=8009
  worker.host1.host=host1
  worker.host1.type=ajp13
  worker.host1.lbfactor=10
  worker.host1.socket_timeout=300
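For reference, the load-balanced worker that ties the three hosts together is defined roughly like this (from memory; "loadbalancer" is just the name I use, and host2/host3 repeat the host1 block above):

  worker.list=loadbalancer

  worker.loadbalancer.type=lb
  worker.loadbalancer.balanced_workers=host1,host2,host3

Apache then sends requests to it via JkMount directives pointing at "loadbalancer".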
I keep experiencing the following problems:

- The thread count of each Tomcat process keeps increasing (it never decreases).
- The memory usage also keeps increasing.
- When examining the Tomcat processes with "ps -aux" I see many (>200) threads older than one day.
  I thought that the socket_timeout would always stop such threads after 5 minutes.

The AJP connector threads keep throwing exceptions like:

  2002-12-09 16:58:59 Ajp13Processor[8009][583] process: invoke
  java.net.SocketException: Socket closed
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:126)
        at org.apache.ajp.Ajp13.send(Ajp13.java:525)
        at org.apache.ajp.RequestHandler.finish(RequestHandler.java:495)
        at org.apache.ajp.Ajp13.finish(Ajp13.java:395)
        at org.apache.ajp.tomcat4.Ajp13Response.finishResponse(Ajp13Response.java:196)
        at org.apache.ajp.tomcat4.Ajp13Processor.process(Ajp13Processor.java:464)
        at org.apache.ajp.tomcat4.Ajp13Processor.run(Ajp13Processor.java:551)
        at java.lang.Thread.run(Thread.java:536)

  2002-12-09 13:09:21 Ajp13Processor[8009][349] process: invoke
  java.lang.IllegalArgumentException: Cookie name path is a reserved token
        at javax.servlet.http.Cookie.<init>(Cookie.java:185)
        at org.apache.ajp.tomcat4.Ajp13Request.addCookies(Ajp13Request.java:189)
        at org.apache.ajp.tomcat4.Ajp13Request.setAjpRequest(Ajp13Request.java:148)
        at org.apache.ajp.tomcat4.Ajp13Processor.process(Ajp13Processor.java:446)
        at org.apache.ajp.tomcat4.Ajp13Processor.run(Ajp13Processor.java:551)
        at java.lang.Thread.run(Thread.java:536)

- When a Tomcat instance gets into trouble, some connections still seem to work while others emit:

  2002-12-09 13:32:25 Ajp13Processor[8009][155] process: invoke
  java.lang.OutOfMemoryError

The problem is that the mod_jk module (in the Apache server) sometimes doesn't notice that a Tomcat instance is in trouble and keeps sending connections to that particular instance, thereby freezing the whole cluster.

----------

Now my questions are: Do you think that upgrading to some other version of Tomcat, mod_jk, Apache, etc. would solve some of my problems? Does anyone have experience with Tomcat load balancing under high load (currently ~1 million pageviews/day)?

Any help would be appreciated.

llap,
Julian Löffelhardt
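P.S. In case the connector setup matters: the AJP13 connector in each Tomcat's server.xml is declared roughly like this (from memory; minProcessors and acceptCount are the stock values, only maxProcessors was raised to allow the 800 connections mentioned above):

  <!-- AJP 1.3 Connector on port 8009 -->
  <Connector className="org.apache.ajp.tomcat4.Ajp13Connector"
             port="8009" minProcessors="5" maxProcessors="800"
             acceptCount="10" debug="0"/>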