tomcat-users mailing list archives

From André Warnier <...@ice-sa.com>
Subject Re: MaxClients and maxThreads
Date Tue, 24 Sep 2013 06:08:39 GMT
mohan.radhakrishnan@polarisFT.com wrote:
> Yes. That is probably the capacity planning part that involves think time 
> analysis and concurrency.
> 
> What Were They Thinking:
> Modeling Think Times for Performance Testing
> Tom Wilson
> 
> from Computer Measurement Group is what I plan to refer to. But don't know 
> yet how to mine this from awstats.
> 
> The Redhat link describes it like this
> 
> MaxClients( 300 ) /  ThreadsPerChild( 25 ) = Processes( 12 )
>         mod_jk default connection pool
> Each worker has a connection pool and by default the connection pool size 
> is equal to ThreadsPerChild( 25 )  
> In the default case each worker has 25 connections multiplexed over 12 
> processes equaling 300.   Two workers will have  300 x 2 =600
> connections to Jboss.
> 
> But I don't understand how one core with 2 hardware threads can support 
> 200 threads. I don't get that calculation. 
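
Just to make the quoted Red Hat arithmetic concrete, here is a trivial sketch (all numbers are the ones from the example above, nothing else is assumed):

```python
# Checking the quoted Red Hat arithmetic (numbers from the example above).
max_clients = 300        # Apache MaxClients
threads_per_child = 25   # Apache ThreadsPerChild = mod_jk default pool size

processes = max_clients // threads_per_child
assert processes == 12   # 300 / 25 = 12 httpd processes

# Each process holds a pool of 25 backend connections:
connections_per_worker = processes * threads_per_child
assert connections_per_worker == 300

# With two workers, total connections to JBoss:
assert 2 * connections_per_worker == 600
```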

And you are right not to understand it, because there is no such calculation. 200 threads 
doing what? Printing "Hello World", or calculating the GDP of China?

> The problem is that when I draw
> a throughput graph using think time analysis and concurrent connections 
> estimate I have to use 800 threads for a 4-core system if we have only 
> Apache there. 
> 
Why not just forget about the cores? They are not really relevant here.
For the last 30 years, computers have been doing time-sharing between multiple processes.

That means that the same CPU can handle multiple processes running "at the same time".
Having more than one core just means that the CPU can, at certain times, actually be 
processing more than one task simultaneously.  For all practical intents and purposes, 
it is basically the same as having one core that is two (or three, or four) times as fast.
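
To illustrate why the core count matters so little for threads that spend their time waiting rather than computing, here is a small Python sketch (the 0.5-second sleep is of course just a stand-in for real I/O or backend wait time):

```python
# Sketch: 200 mostly-idle threads on a handful of cores.
# Each thread simulates a request that spends its time waiting
# (on I/O, a backend, a database...) rather than burning CPU.
import threading
import time

def simulated_request():
    time.sleep(0.5)  # while sleeping, the thread occupies no core

start = time.monotonic()
threads = [threading.Thread(target=simulated_request) for _ in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# All 200 threads finish in roughly 0.5 s total, not 200 * 0.5 s,
# because a waiting thread costs (almost) nothing.
print(f"200 threads done in {elapsed:.2f}s")
```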

What is important here is how many client requests your chosen architecture can process in 
any chosen amount of time.  And that depends for 90% on your application(s).
So take the default values for everything (because at least they are not severely 
unbalanced), and *measure* how your system is doing under load.  If you are satisfied, 
leave it at that and do the same on another system.  If you are not satisfied, /then/ is 
the time to start looking deeper. But then you'll be doing it with some basic numbers to 
compare against, and not in the dark like now.
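
As a sketch of what "measure under load" can look like (the URL, request count and concurrency below are placeholders; a real test would point at your Apache front end with realistic numbers, or use a dedicated tool such as ab or JMeter):

```python
# Minimal load-measurement sketch: fire `total` GET requests with
# `concurrency` of them in flight at once, then report throughput.
# The url/total/concurrency values are placeholders, not recommendations.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def measure(url, total=200, concurrency=20):
    def one_request(_):
        with urllib.request.urlopen(url) as resp:
            resp.read()
            return resp.status

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(one_request, range(total)))
    elapsed = time.monotonic() - start

    ok = sum(1 for s in statuses if s == 200)
    print(f"{ok}/{total} OK, {total / elapsed:.1f} req/s")
    return total / elapsed
```

Run it once against the defaults, note the req/s figure, and only then start turning knobs so you have a baseline to compare against.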



---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org

