From: Filip Hanik - Dev Lists
Date: Wed, 04 May 2011 10:33:39 -0600
To: Tomcat Developers List
Subject: Re: BIO performance issues

On 5/4/2011 9:54 AM, Mark Thomas wrote:
> On 04/05/2011 16:17, Filip Hanik - Dev Lists wrote:
>> On 5/3/2011 2:02 PM, Mark Thomas wrote:
>> In a similar fashion, we can also craft a test run that will yield a
>> substantial improvement over the old implementation in throughput.
>> So there is a test case to prove every scenario.
> Could you outline what a test case looks like. It would help with the
> general understanding of what problem maxConnections is trying to solve.

OK. We have an acceptor thread (AT) that calls ServerSocket.accept() in
an endless loop. In the previous implementation, the AT would accept a
socket, then wait for a worker thread to become available to handle the
connection. New incoming connections would meanwhile queue up in the
operating system's backlog.

The old implementation was extremely unfair in how it handled requests:
some requests got handled right away, while others could wait for long
periods of time. As you may know, a client's connection may "die" in the
backlog, at which point the client has to attempt a new connection.
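For illustration, a minimal sketch of that old accept-then-wait pattern,
using a fixed worker pool guarded by a semaphore (this is not the actual
connector code; the class name, port, backlog, and pool size are made up):

    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    public class OldBioAcceptorSketch {
        static final int MAX_THREADS = 200;

        public static void main(String[] args) throws Exception {
            ExecutorService workers = Executors.newFixedThreadPool(MAX_THREADS);
            Semaphore freeWorkers = new Semaphore(MAX_THREADS);
            ServerSocket server = new ServerSocket(8080, 100); // 100 = TCP accept backlog
            while (true) {
                Socket socket = server.accept();
                // Wait until a worker thread is free. While the acceptor
                // blocks here, new connections pile up (or die) in the
                // operating system's backlog.
                freeWorkers.acquire();
                workers.execute(() -> {
                    try {
                        handle(socket); // this thread is married to the socket,
                    } finally {         // keep-alive included, until it closes
                        freeWorkers.release();
                    }
                });
            }
        }

        static void handle(Socket socket) {
            try {
                // read request(s), write response(s) ...
                socket.close();
            } catch (Exception ignored) {
            }
        }
    }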
If you really want a simple test case, then run:

    maxThreads=200
    clients=200
    keepalive=on

In the old implementation, keep-alive would be turned off and performance
would suffer, even though the system has plenty of resources to handle it.
While this test case is very narrow and simple, it is the other extreme of
the use case you presented.

The new, queue-based implementation came about because the new async
requirements made it necessary to disconnect a thread from a socket;
previously, a thread was married to a socket for as long as the socket was
alive. With the new implementation, just like with NIO, there is no longer
a stopper on the acceptor thread (AT): it will happily keep accepting
connections until you run out of buffer space or port numbers. This
presents a DoS risk, one that has existed in NIO for a while. So
maxConnections has been put in place to stop accepting connections and
push back new ones into the backlog. In short, maxConnections exists to
stop the acceptor thread from taking in more than it can handle.

>> Here is what I propose, and you'll see that it's pretty much inline with
>> what you suggest.
> Yep. That works for me. I do have some additional questions around
> maxConnections - mainly so I can get the docs right.
>
>>> c) remove the configuration options for maxConnections from the BIO
>>> connector
>> I think you still misunderstand why maxConnections is there, at some
>> point you need to push back on the TCP stack.
> Some more detail on exactly the purpose of maxConnections would be
> useful. The purposes I can see are:
> - limiting connections since the addition of the queue means they are
> not limited by maxThreads

Correct. A system with maxThreads=200 should be able to handle
connections=500 with keep-alive on and perform very well.

> - fair (order received) processing of connections?

Correct. Almost no clients use pipelined requests, so the chance that
there is data already waiting on a request that has just finished is very
slim. It is more probable that there is data on a request that finished
earlier in the cycle, which is why processing connections in the order
they were received matters.

I hope that explains it. And by default, with the config options/defaults
I suggested, you'll get the exact behavior of the old connector, but you
can still benefit from the new connector logic.
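To put the queue-plus-maxConnections mechanics above into code, here is a
minimal sketch using a plain Semaphore and a LinkedBlockingQueue (the real
connector's internals differ; the class name, port, backlog, and limits
are illustrative):

    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.Semaphore;

    public class MaxConnectionsSketch {
        static final int MAX_THREADS = 200;
        static final int MAX_CONNECTIONS = 500; // illustrative values

        static final Semaphore connections = new Semaphore(MAX_CONNECTIONS);
        static final BlockingQueue<Socket> ready = new LinkedBlockingQueue<>();

        public static void main(String[] args) throws Exception {
            for (int i = 0; i < MAX_THREADS; i++) {
                Thread worker = new Thread(MaxConnectionsSketch::work);
                worker.setDaemon(true);
                worker.start();
            }
            ServerSocket server = new ServerSocket(8080, 100);
            while (true) {
                // Once MAX_CONNECTIONS sockets are open this blocks: the
                // acceptor stops, and new connections are pushed back into
                // the TCP backlog.
                connections.acquire();
                ready.put(server.accept()); // FIFO queue: order-received fairness
            }
        }

        static void work() {
            while (true) {
                Socket socket;
                try {
                    socket = ready.take(); // a worker takes whatever is next;
                } catch (InterruptedException e) { // it is no longer married
                    return;                        // to any one socket
                }
                handle(socket);
            }
        }

        static void handle(Socket socket) {
            try {
                // read request(s), write response(s) ...
                socket.close();
            } catch (Exception ignored) {
            } finally {
                connections.release(); // connection done: admit one more
            }
        }
    }

Note how the acceptor never waits for a worker thread: pushback happens
only at the connection limit, which is the behavior described above.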