tomcat-users mailing list archives

From Mark Eggers <>
Subject Re: Help in diagnosing server unresponsiveness
Date Sun, 03 Feb 2013 05:59:37 GMT
On 2/2/2013 8:07 PM, Zoran Avtarovski wrote:
> Thanks Miguel,
> This is what I also suspect, but I can't see any evidence. The server has
> gone 10 days under heavy loads without a glitch and then it will hang a
> couple of times in the next few days with no apparent rhyme or reason.
> Z.
> On 3/02/13 5:56 AM, "Miguel González Castaños"
> <> wrote:
>> On 01/02/2013 20:08, Christopher Schultz wrote:
>>> Zoran,
>>> On 1/31/13 8:36 PM, Zoran Avtarovski wrote:
>>>> We have an application running on the latest Tomcat 7, and the
>>>> server crashes or becomes unresponsive. This occurs every few
>>>> days at no fixed interval or time of day, and it certainly
>>>> doesn't correlate to any app function, at least not according to
>>>> the logs.
>>> Can you describe the "crash" in more detail? OOME? If so, what kind
>>> (heap or PermGen)? Lock-up (deadlock, etc)? Actual JVM crash (produces
>>> a core dump or native stack dump)?
>> I would go in that direction too. Enable logs and core or stack dumps
>> and analyze them. Be sure you are not restarting Tomcat in your crontab
>> (I had a server that was restarted once a week, which masked some
>> memory starvation).
>> In my case I ended up disabling JavaMelody (it was causing side
>> effects in our webapp by mishandling international characters). If the
>> JavaMelody reports aren't giving you any clue, beware that JavaMelody
>> has its own memory overhead (probably not much in your case, but in
>> mine it was around 200 MB of heap on a 1 GB virtual server).
>> I followed Chris's directions: I got stack dumps after a server crash
>> and analyzed them with the Eclipse Memory Analyzer. I realized our
>> programmer had loaded more objects into memory than the server could
>> cope with. So the root cause was not a memory leak but bad memory
>> management.
>> Regards,
>> Miguel

I've sort of followed this thread. If I remember correctly, you've 
recently moved to Linux.

Here's an approach that might tell you what's going on at the time of 
the problem.

When you're experiencing the problem, if you can, get a full thread 
dump. On Linux, that means sending signal 3 (SIGQUIT) to the PID (kill 
-3 PID), where PID is the process ID of the Tomcat JVM in trouble. The 
dump is written to the JVM's stdout, normally catalina.out; the process 
keeps running.

At the same time, either run ps -FL -p PID, or run top -p PID and 
switch to watching threads (press H).
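The two steps above can be sketched as a short shell session. The pgrep 
pattern and variable name are my assumptions; adjust them for how your 
Tomcat is launched:

```shell
# Find the Tomcat JVM (Tomcat's main class is
# org.apache.catalina.startup.Bootstrap); pattern is an assumption.
TOMCAT_PID=$(pgrep -f org.apache.catalina.startup.Bootstrap | head -1)

# Signal 3 (SIGQUIT) makes the JVM print a full thread dump to its
# stdout (typically catalina.out); it does not kill the process.
kill -3 "$TOMCAT_PID"

# One-shot snapshot: one line per LWP (thread), with per-thread CPU.
ps -FL -p "$TOMCAT_PID"

# Or watch live; -H shows individual threads instead of the process sum.
top -H -p "$TOMCAT_PID"
```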

See if you can spot the thread using 100% of the CPU in either ps or 
top.

Sadly, the thread ID in the thread dump is in hex, while the thread ID 
from either of the above two methods is decimal, so you'll have to do a 
bit of digging. In the thread dump, the number appears as nid=0xNNN. In 
top's thread view, the thread ID is shown in the PID column; in ps, 
it's the LWP column.
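The decimal-to-hex digging can be done with printf (the LWP value below 
is just an example):

```shell
# Convert a decimal thread ID (the LWP column from ps, or the PID
# column in top's thread view) to the hex "nid" used in the thread dump.
LWP=12345                        # example decimal thread ID
NID=$(printf 'nid=0x%x' "$LWP")
echo "$NID"                      # -> nid=0x3039

# Then find that thread in the saved dump (file name is hypothetical):
# grep "$NID" catalina.out
```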

Hopefully, if you catch the Tomcat process at the right time, you'll be 
able to see which thread is consuming all the CPU, and from the thread 
dump see what the thread is doing.

. . . . just my (Saturday night) two cents
