tomcat-users mailing list archives

From Rainer Jung <rainer.j...@kippdata.de>
Subject Re: apache getting in "sending reply" state when connecting to tomcat
Date Thu, 30 Aug 2007 17:39:09 GMT
Gerhardus.Geldenhuis@gta-travel.com wrote:
> Hi
> I'm going to be a real pain, but it makes no sense now...

Let's see :)

> The email has been a team effort in our offices. We have included some
> diagrams to help illustrate our understanding, or lack thereof.
> 
> Using a simple example:
> 
> 1/ Assume I have one httpd server (prefork) that can spawn a maximum of
> 200 children (through httpd Maxclients directive).
> 
> 2/ Assume I have 1 Tomcat server that can handle 200 threads.
> 
> If I connect Apache to Tomcat with mod_jk (lb) I can, in theory,
> handle 200 concurrent connections.
> 
> Now, if I change the figures
> 
> 1/ Assume I have one httpd server (prefork) that can spawn a maximum of
> 200 children (through httpd Maxclients directive).
> 
> 2/ Assume I have 4 tomcat servers that can handle 200 threads each.
> 
> In this case each apache child opens a connection to each tomcat server
> so I have reached the maximum amount of connections each tomcat can
> handle. What I cannot understand is that by increasing the tomcats to 4
> I now have 800 possible connections but with the above config I can only
> access 200 of them. If I set apache to 800 (through httpd Maxclients
> directive) I will open more connections to each tomcat than they can
> handle.
> 
> Is the above scenario correct? And if it is, then we are not getting
> more throughput by adding more tomcats, and it would be better to
> access the tomcats directly.

Your considerations are correct. Since you can't influence which apache 
httpd process handles requests for which Tomcat instance, and since the 
processes don't share their connections, this design doesn't scale to a 
huge farm with a simple 1:N (1 httpd, N >> 1 Tomcats) setup.
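
To make the 1:N case concrete, here is a minimal workers.properties 
sketch for the 1 httpd : 4 Tomcats example (host names and ports are 
placeholders):

    # workers.properties - one lb worker over four AJP backends
    worker.list=lb
    worker.tc1.type=ajp13
    worker.tc1.host=tomcat1.example.com
    worker.tc1.port=8009
    # ... tc2, tc3, tc4 defined the same way ...
    worker.lb.type=lb
    worker.lb.balance_workers=tc1,tc2,tc3,tc4

With the prefork MPM each of the 200 children keeps its own pool with 
(by default) one connection per balanced worker, so every Tomcat can end 
up with all 200 children connected to it.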

What can you do:

0) For a relatively small farm the problem is usually not very big, 
because under high load the costly resource is CPU power, not the 
memory and switching overhead of having too many threads.

1) You can use the APR connector for Tomcat. This will decouple the 
thread from the connection, as long as there's no request active on it. 
That way you'll only need threads for the real request parallelism on 
each backend Tomcat. The number of connections will stay high though, 
so you can't scale to hundreds of Tomcats with a thousand connections 
each.
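
As an illustration, assuming Tomcat 6 with the Tomcat Native (APR) 
library installed, the AJP connector in server.xml would look roughly 
like this (attribute values are illustrative):

    <!-- server.xml: APR-based AJP connector; idle connections
         no longer each occupy a thread -->
    <Connector port="8009"
               protocol="org.apache.coyote.ajp.AjpAprProtocol"
               maxThreads="200" />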

2) You can use the worker MPM, because there a configurable number of 
threads shares the same connection pool. For N Tomcats, on average only 
Threads_per_Process/N requests will need a connection to any one Tomcat 
instance. Of course in reality the number will be higher, but for bigger 
N and enough threads per process you should notice a relevant decrease 
in connections. Maybe not by 1/N but something like 2/N, depending on 
how much session affinity breaks ideal balancing. If you get close to 
some factor C/N for a not too large constant C, you are back in the 
scaling business.
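
A worker MPM sizing sketch (numbers purely illustrative):

    # httpd.conf - worker MPM: 8 processes x 25 threads = 200 clients
    <IfModule mpm_worker_module>
        ServerLimit         8
        ThreadsPerChild    25
        MaxClients        200
    </IfModule>

Here each of the 8 processes keeps a single jk connection pool shared 
by its 25 threads, and on average only ThreadsPerChild/N of those 
threads need a connection to any one Tomcat at the same time.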

3) For huge designs you'll need to partition into M:N (M apache httpd, 
N Tomcat), where the quotient N/M doesn't get too big.
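
Purely as a hypothetical illustration of such a partition: with M=4 
httpd front ends and N=8 Tomcats (N/M=2), each httpd balances only over 
its own pair, e.g. on the first httpd:

    # workers.properties on httpd #1 - balances only its partition
    worker.list=lb
    worker.lb.type=lb
    worker.lb.balance_workers=tc1,tc2

so each Tomcat receives connections from one httpd instead of from all 
four.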

4) If your balancing breaks, or, much more likely, if something in your 
system gets slow, then your assumptions about parallelism no longer 
hold. You can't fix that without fixing the cause of the slowness. What 
is important though is to configure the idle timeouts for the Tomcat 
thread pool and the jk connection pool, such that once the original 
cause of the slowness is gone, the connections and threads drop back 
below the critical level.
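
For example, in mod_jk the pool idle timeout is set per worker (in 
seconds), with a matching connectionTimeout (in milliseconds) on the 
Tomcat AJP connector; the values here are illustrative:

    # workers.properties - close backend connections idle > 10 minutes
    worker.tc1.connection_pool_timeout=600

    <!-- server.xml - matching idle timeout on the Tomcat side -->
    <Connector port="8009" protocol="AJP/1.3"
               maxThreads="200"
               connectionTimeout="600000" />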

The situation you experienced most likely came from slowness in the 
backend applications (remember the need for a Java thread dump?). Then 
any throughput system will soon fill up from the back to the front. The 
best you can do is answer the overload requests quickly with an error, 
so that the backend systems have a chance to become stable again. For 
this you need timeouts and other load-limiting configuration.

When doing sizing considerations, you always need to be clear whether 
you are talking about the normal situation, or trying to find out what 
will happen during times of overload.

> So using a ridiculous example: if you have 100 tomcat boxes connecting
> to one httpd server, the limit for the number of spawned children would
> still only be 200, even though you should be able to handle 100x200
> concurrent connections. Even if you take into account that each request
> received per second takes 4 seconds to process, it still does not seem
> an effective use of the tomcat resources.


> A few other resulting questions:
> If child1, child2, child3 etc each have a connection to each tomcat,
> does each child also do its own load balancing or do all the children
> share information to do load balancing?

Fortunately they share the balancing state. This was introduced about 
10 JK releases ago by means of a shared memory segment.
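
The location of that segment is configured in httpd.conf (path 
illustrative):

    # httpd.conf
    JkShmFile /var/log/httpd/jk-runtime-status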

You could ask why the processes don't share the connection pool. They 
might do so some time in the future. Historically the pool came before 
the shared memory for balancing. We prefer to stabilize JK 1.2.x now and 
start working on the next major release. Switching to a shared pool will 
likely lead to a couple of releases with a couple of bugs, so I don't 
expect that to happen in 1.2.x.

Regards,

Rainer





