From: Edoardo Causarano <edoardo.causarano@gmail.com>
Subject: Re: Clustering with mod_jk
Date: Fri, 1 Sep 2006 21:54:36 +0200
To: "Tomcat Users List" <users@tomcat.apache.org>

Using mpm_worker gave less impressive results; I'd say about half, with
a much worse load average (well above 5) and lots of swapping. It seems
prefork works better on Linux, which surprises me.

Anyway, even assuming I got maxProcessors wrong, I should have seen
queues building up at 150 * 4 = 600 concurrent requests; instead they
start at less than half that value.

What makes me think this is a mod_jk issue is that suddenly all request
flow locks onto one node and stays busy until I restart Apache.

e

On 1 Sep 2006, at 21:21 (GMT+02:00), Filip Hanik - Dev Lists wrote:

> Since you are using prefork, you must set cachesize=1 in your
> workers.properties file.
> However, you have MaxClients 4096; in order to serve that from
> Tomcat, your JK connector should have maxProcessors="4096".
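
The workers.properties shape Filip describes here might look like the
sketch below. The worker names, host, and port values are illustrative
assumptions; only cachesize=1, the four nodes, and the disabled
stickiness come from this thread.

    # mod_jk 1.2.x worker definitions. With the prefork MPM each Apache
    # child is a single-threaded process that can hold at most one
    # backend connection, hence cachesize=1 per worker.
    worker.list=loadbalancer

    worker.node1.type=ajp13
    worker.node1.host=localhost    # assumed: all four nodes on one box
    worker.node1.port=8009         # assumed: each node on its own port
    worker.node1.cachesize=1
    worker.node1.lbfactor=1

    # node2..node4 repeat the block above on their own AJP ports

    worker.loadbalancer.type=lb
    worker.loadbalancer.balance_workers=node1,node2,node3,node4
    worker.loadbalancer.sticky_session=0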
> An alternative, and safe, solution, although with much lower
> performance, is to set "MaxRequestsPerChild 1"; this way you can get
> away with MaxClients 4096 and still have a much lower maxProcessors
> value on Tomcat.
>
> Filip
>
> Edoardo Causarano wrote:
>> Hello List,
>>
>> Scenario:
>>
>> - a 4-node Tomcat 5.0.28 vertical cluster ( :-| same server... still
>>   testing, but it could have been 8) listening on AJP:
>>
>>     <Connector maxProcessors="150" minProcessors="50"
>>         protocol="AJP/1.3"
>>         protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"
>>         redirectPort="8443">
>>
>> - 1 httpd 2.0.52 with mod_jk 1.2.15 and a prefork config on RH AS4,
>>   kernel 2.6.9-5.EL; sticky sessions are disabled to keep the stress
>>   scripts from hitting only one node:
>>
>>     StartServers         40
>>     MinSpareServers      80
>>     MaxSpareServers      280
>>     ServerLimit          4096
>>     MaxClients           4096
>>     MaxRequestsPerChild  4096
>>
>> - 1 application that a couple of thousand users should hammer
>>
>> What happens is that the app takes the stresser for a ride until
>> about 240 users, then starts to die; jkmonitor shows busy and max
>> requests climbing linearly on only one node, and pages hang.
>> Disabling that node moves the hung request handling to the next one.
>>
>> Where's the bottleneck? Any known bug in mod_jk? Should I increase
>> the threads on the Tomcat nodes?
>>
>> Tnx,
>> e
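
A note on the arithmetic behind the two suggestions quoted above. Under
prefork, every Apache child holds its own AJP connection, so the worst
case is all MaxClients children landing on a single backend; the quoted
setup caps Tomcat at 150 * 4 = 600 processors against up to 4096 Apache
children. A minimal httpd.conf sketch of the two ways to close that gap
(only MaxClients 4096 and MaxRequestsPerChild 1 come from the thread;
the commentary is an assumption-laden reading, not a tested config):

    # Option A: size each Tomcat AJP connector for Apache's worst case,
    # i.e. maxProcessors="4096" in server.xml, MaxClients 4096 here.
    #
    # Option B, the "safe" alternative: have each child exit after one
    # request, so its AJP connection is torn down immediately and Tomcat
    # can keep a much smaller maxProcessors. Costs a fork per request.
    MaxClients           4096
    MaxRequestsPerChild  1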