tomcat-users mailing list archives

From André Warnier
Subject Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!
Date Tue, 15 Apr 2014 20:17:33 GMT
Christopher Schultz wrote:
> Ian,
> On 4/15/14, 3:33 PM, Ian Long wrote:
>> Thanks for the reply.
>> It looks to me like tomcat just gave up partway through generating
>> the response; I’m trying to figure out why.
>> There are no exceptions in either my application logs or the tomcat
>> log itself, which is frustrating.
> Definitely. You checked catalina.out (or wherever stdout goes) as well
> as your application's logs?
>> Thanks, I’ll look into the executor.
>> Apache matches what is set in my connector:
>> <IfModule prefork.c>
>>     StartServers          8
>>     MinSpareServers       5
>>     MaxSpareServers      20
>>     ServerLimit         800
>>     MaxClients          800
>>     MaxRequestsPerChild   0
>> </IfModule>
>> Yes, the connector settings should be fine; there are usually fewer
>> than 20 httpds.
> You mean 20 httpd prefork processes, right? That should be fine: it
> means you will need 20 connections available in Tomcat.
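
As an aside, a minimal sketch of what the Tomcat side of that pairing can look like in 
server.xml; the port and timeout values here are illustrative, not Ian's actual settings. 
The point is that the AJP connector's maxThreads has to cover httpd's worst case, i.e. 
the ServerLimit/MaxClients of 800 above:

<Connector port="8009" protocol="AJP/1.3"
           maxThreads="800" connectionTimeout="600000"/>

With only ~20 httpd children running, only about 20 of those threads would normally be busy.
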
>> Forgot to mention that it looks like tomcat returned around 50% of
>> what the page should have been, before it hit the Internal Server
>> Error.
> Have you run out of memory or anything like that?

I was going to ask the same thing, slightly differently.

I can think of a scenario which might result in the same kind of symptoms, though I am 
not sure whether it makes sense, Java-wise.

A request is received by httpd, which passes it to Tomcat via mod_jk.
Tomcat allocates a thread to handle the request, and this thread starts running the 
corresponding application (webapp).  The webapp starts processing the request, produces 
some output, and then for some reason to be determined, it suddenly runs out of memory, 
and the thread running the application dies.
Because the JVM has temporarily run out of memory, there is no way for the application to 
write anything to the logs: doing so would itself require allocating additional memory, 
and there isn't any available.
So Tomcat just notices (a posteriori) that the thread died, and returns an error 500 to 
mod_jk and httpd.
As soon as the offending thread dies, some memory is freed, and Tomcat appears to work 
normally again, even for other requests to that same application, because those requests 
do not cause the same "spike" in memory usage.
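
To make that scenario concrete, here is a minimal sketch (a hypothetical servlet, not 
Ian's actual code) of how a single request could commit half of a response and then kill 
its own thread with an OutOfMemoryError, leaving nothing in the application logs:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SpikeServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>first half of the page...");
        out.flush();  // the response is now committed: the 200 status is on the wire

        // A request-dependent allocation; "size" is a made-up parameter.
        // If it exceeds the free heap, the thread dies with OutOfMemoryError
        // *after* part of the body has reached the client, and any attempt
        // to log the error may itself fail for lack of memory.
        String s = req.getParameter("size");
        byte[] spike = new byte[(s == null) ? 16 : Integer.parseInt(s)];

        out.println("second half, never reached (" + spike.length + " bytes)");
        out.println("</body></html>");
    }
}

Because the response is already committed when the error propagates up, Tomcat cannot 
replace it with a clean error page any more; all it can do is abort the exchange, which 
mod_jk and httpd then report as an error. That is roughly what Ian describes: part of 
the page, then an Internal Server Error.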

Tomcat/Java experts: could something like this happen, and would it match the symptoms as 
described by Ian?

And Ian, could it be that some requests to that application, perhaps because of a 
parameter that is different from the other cases, cause such a spike in memory 
requirements?
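
For what it's worth, the theory can be tested without relying on the application's own 
logging (which, as argued above, may itself fail under memory pressure) by letting the 
JVM record the failure. These are standard HotSpot options; the dump path is just an 
example, and on Tomcat they would typically go into bin/setenv.sh:

CATALINA_OPTS="$CATALINA_OPTS \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/var/log/tomcat/dumps"

The resulting .hprof file can then be opened in jvisualvm or Eclipse MAT to see what was 
filling the heap when the thread died.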
