tomcat-users mailing list archives

From Pid <>
Subject Re: Error 503 occurring when server under load
Date Thu, 07 Oct 2010 21:40:14 GMT
On 07/10/2010 18:31, André Warnier wrote:
> Rob G wrote:
>> Hey all,
>> Recently migrated a production site (mixture of Servlets and JSPs)
>> from Oracle Application Server to Apache/Tomcat. Since then we have
>> seen numerous HTTP Error 503 - Service unavailable errors at peak
>> times when site is under load. mod_jk.log has the following error
>> message(s):

OK.  Is there anything else different apart from the Servlet container?

>> [2184:1952] [error] jk_lb_worker.c (1473): All tomcat instances
>> failed, no more workers left

Seems like your Tomcats are maxed out.
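For reference, jk_lb_worker logs that message when every member of the load
balancer is in an error state.  A minimal workers.properties for a two-instance
setup like yours would look something like this (worker names and AJP ports are
my assumptions, not taken from your config):

```
# workers.properties - sketch only; adjust names/ports to your setup
worker.list=lb

worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=8009

worker.tomcat2.type=ajp13
worker.tomcat2.host=localhost
worker.tomcat2.port=8010

worker.lb.type=lb
worker.lb.balance_workers=tomcat1,tomcat2
```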

>> I'm looking for help in trying to tweak settings to prevent this, or
>> confirmation that I've configured the setup correctly.

>> Platform:
>> Windows Server 2003 SP2
>> Setup:
>> Two tomcat instances with a single Apache front end, all on the same
>> server

Why do you have two Tomcat instances?  (It's not a trick question, I'm
interested in your reasoning.)

>> Versions
>> Tomcat: 6.0.24
>> Apache: 2.2.16
>> mod SSL: 2.2.16
>> Open SSL: 0.9.8

There's newer OpenSSL available, with important security fixes, if I'm
not mistaken.

>> mod_JK:1.2.30
> Your configuration looks very clean to me (no unnecessary settings
> etc.), which in this case is a plus (a good base to start tuning).

Was there an attachment I didn't see?

> You may want to upgrade Tomcat to the latest version (6.0.29).


> But before you start tuning, you should get some idea of what is
> actually going on.


> For example, at the moment these errors happen, what are these Tomcats
> really doing ?
> Are they really busy each processing 200 requests, with 200 threads
> running and actually doing something ? (200 is the default for the
> "maxThreads" attribute of the AJP Connector).

Q: How many threads & server instances did you have before?

> If yes, then you may just need a leaner application, or a bigger system
> (more RAM, faster CPU), or more systems.  What does the Task Manager
> tell you about the total system load ?

What were your Java -Xmx etc settings before, and now?

> If not, and many of these threads are waiting, then you may have an
> issue with a keepAlive that is too long.

Or several other things.

What is your Connector config?

Are you using an Executor?
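(An Executor lets both Connectors share one thread pool; if you use one it
looks roughly like this in server.xml - a sketch, values are examples:

```xml
<!-- shared thread pool; referenced by name from the Connector -->
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="200" minSpareThreads="10" />

<Connector executor="tomcatThreadPool"
           port="8009" protocol="AJP/1.3" />
```
)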

Is there a database behind this, if so, what are the DataSource pool
size settings?
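(In Tomcat 6 a container-managed DataSource is usually a DBCP pool declared in
context.xml; the pool-size attributes would be maxActive / maxIdle / maxWait.
A sketch - the resource name, driver and URL below are placeholders, not your
actual settings:

```xml
<!-- context.xml: DBCP pool sketch; replace placeholders with real values -->
<Resource name="jdbc/MyDB" auth="Container" type="javax.sql.DataSource"
          maxActive="50" maxIdle="10" maxWait="10000"
          driverClassName="your.jdbc.Driver"
          url="jdbc:yourdb://dbhost/dbname"
          username="dbuser" password="dbpass" />
```

If maxActive is much smaller than maxThreads, request threads can pile up
waiting for a connection.)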

> See the Tomcat AJP Connector documentation for the
> connectionTimeout and keepAliveTimeout attributes.
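(Both are Connector attributes in server.xml; keepAliveTimeout mainly matters
on an HTTP Connector, since AJP connections from mod_jk are held open anyway.
A sketch with example values, in milliseconds:

```xml
<!-- server.xml: HTTP connector; timeout values are examples only -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           keepAliveTimeout="15000" />
```

With mod_jk in front, also look at Apache's own KeepAliveTimeout directive in
httpd.conf.)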
> Whatever you do, first get an idea of the starting situation.  Then
> modify one setting at a time, and observe (and note) the effects.

Thread dumps from a maxed out Tomcat will tell you what each Thread is
waiting for.  Collect a series of these during high load periods to find
out what's happening.
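Assuming you have a JDK (not just a JRE) on the box, jstack will take a dump of
a running Tomcat; the PID below is a placeholder for your Tomcat process id:

```
jstack -l 1234 > dump1.txt
```

Take several a few seconds apart and compare what the busy threads are doing.
(Ctrl-Break in the console window Tomcat runs in also prints a thread dump to
stdout, if jstack isn't available.)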

Enable JMX and check the Connector (and the Executor, if enabled) and the
DataSources; I'd be looking at the backlog of requests, the active and idle
pool members, and the total pool size.
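To enable remote JMX, set the standard com.sun.management.jmxremote system
properties before starting each instance - a sketch for Windows (port number is
an example, and disabling auth/SSL is only sensible for a quick look on a
trusted network):

```
rem setenv.bat / before catalina.bat - example port, insecure settings for testing only
set CATALINA_OPTS=-Dcom.sun.management.jmxremote ^
 -Dcom.sun.management.jmxremote.port=9010 ^
 -Dcom.sun.management.jmxremote.authenticate=false ^
 -Dcom.sun.management.jmxremote.ssl=false
```

Then connect with jconsole and watch the Connector/Executor/DataSource MBeans
during a load spike.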


