tomcat-users mailing list archives

From: André Warnier
Subject: Re: when memory leak, tomcat can't accept requests?
Date: Sat, 10 Jul 2010 11:14:15 GMT
Pid wrote:
> On 10/07/2010 10:19, jikai wrote:
>>> Because it's already saturated with requests?  No server has an infinite
>>> capacity.  How many threads did jstack report were running?
>>> Can you connect with JMX and see what state the connector is in?
>>> Are you using an Executor in combination with your Connector?
>> Thanks for your quick reply.
>> We didn't use an Executor, and there is no JMX with Tomcat. When the error
>> began, there were more than 600 worker threads (most of them waiting); an
>> hour later, only 50 worker threads were left (I think the idle threads were
>> killed by the pool), BUT Tomcat still can't process requests, and nginx
>> still reports connection timeouts for most requests. I can't understand
>> why this happened.
> Neither can I with the information you've provided so far.  The thread
> dump you've posted doesn't have enough information in it to determine
> what's going on.
> Stick to using the JIO connector for now, and try using an Executor to
> manage the pool.
> Is your application dependent on a database?
> You didn't answer: are you storing the objects you referred to in the
> user session?
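For illustration, here is a minimal server.xml sketch of the Executor setup
suggested above (the attribute values are placeholders, not tuned
recommendations; with Tomcat 6, protocol="HTTP/1.1" selects the JIO connector
by default):

    <!-- shared thread pool, managed independently of any one connector -->
    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
              maxThreads="200" minSpareThreads="10"
              maxIdleTime="60000"/>

    <!-- JIO HTTP connector drawing its threads from the pool above -->
    <Connector executor="tomcatThreadPool"
               port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"/>
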
And it may be worth having a look at the connectionTimeout and keepAliveTimeout settings.
See the definitions here:

By default, keepAliveTimeout is set to the same value as connectionTimeout (in this case
30 s). That means that, after serving the last request of any given client, each thread
will sit waiting for a possible next request on the same connection, for up to 30 s.
That is fine if the pages you serve typically have lots of image links, for example. The
keep-alive then allows the browser to request the embedded images over the same
connection, since it is still alive.
But if your pages typically do not have such embedded objects, then it is ultimately a
waste of threads (and associated resources), which could be put to better use than just
waiting for requests that will never come.

I personally find 30 s way too high for most cases. If a browser is going to request an
inline image, it will do so within at most a few seconds of receiving the original page.

connectionTimeout, for its part, applies to the time between the initial TCP connection
setup by the browser and the moment it consents to send its HTTP request over that
connection. I would think that unless you have *very* slow clients, a time of a few
seconds should suffice. Otherwise, you also open yourself to DoS attacks: a client makes
a connection and then willfully waits before sending a request. Enough of those, and your
server would be paralysed.

Summary: what about setting both connectionTimeout and keepAliveTimeout explicitly to
"5000" (5 s), and seeing how many waiting threads you still have then?
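
For instance (a sketch against a stock server.xml; the port and protocol attributes are
assumed, only the two timeouts matter here):

    <!-- both values are in milliseconds -->
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="5000"
               keepAliveTimeout="5000"/>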
