tomcat-users mailing list archives

From Wade Chandler <>
Subject Re: Tomcat/JVM crashes on Linux
Date Mon, 20 Dec 2004 21:53:46 GMT
Greg Lappen wrote:
> What kind of load does your application handle?  I am not processing a 
> HUGE amount of requests, but we serve about 6,000 visitors a day, 15,000 
> pages.
> Greg
> On Dec 20, 2004, at 1:28 PM, Wade Chandler wrote:
>> Greg Lappen wrote:
>>> Hello-
>>> Has anyone had a problem with Tomcat 5.0.28 crashing on Linux with no 
>>> error messages?
>>> My production server running with JDK 1.4.2_06, RedHat EL 3.0 just 
>>> crashes, no core dump, no errors in catalina.out, no clues.  
>>> Sometimes it goes for days, sometimes it happens several times in one 
>>> day.  I am running the tomcat process behind Apache 2 with mod_proxy. 
>>>  Setting "ulimit -c unlimited" in the startup file still 
>>> did not produce a core file.
>>> If nobody else has experienced this, do you have any suggestions on 
>>> how to debug it further?
>>> Thanks,
>>> Greg
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail:
>>> For additional commands, e-mail:
>> I'm using the same setup as you less Apache2.  I use tomcat as the web 
>> server.  Using TC5.0.28 and JDK1.4.2_06, and I have yet to have the 
>> server crash once.  Not much help, but might give you some clues where 
>> to look.
>> Connector log (mod_proxy...assuming you mean you're using the new 
>> connector code): is there anything in the Apache2 log?  I assume 
>> from your post you mean that the java process just completely goes 
>> away.  You might find (depending on the running directory of the java 
>> process running tomcat) a pid dump log file or something...not sure if 
>> the vm produces one of these or not.  You also might check in 
>> /var/log/messages file to see if for some reason the kernel or some 
>> lib got some error it logged.
>> Wade

Nowhere near that load on the system I wrote about, though it stays up, and 
I haven't rebooted the system or restarted the process in a good number of 
days.  It's only in the hundreds of hits a day.

I have another application which runs over HTTP using RPC with 
serialized classes, and it processes quite a bit of information, plus it 
spawns its own threads.  It uses Apache and/or IIS as the front end, 
though if it were up to me we would only be using Tomcat.  We were using 
an ISAPI C++ application for everything at one time.  We are adding more 
and more functionality to it.  I'm sure it uses more process and memory 
resources, as it will run backend import processes, a ton of logic 
processing, and report generation, and the pure nature of the 
application means more hits a day with more firepower per hit than 
your web site.  In testing it hasn't crashed.

What are your memory settings for your Tomcat process?  If you don't 
give the process enough memory to do what it has to do, it won't be able 
to behave correctly, though catalina.out should show you out-of-memory 
errors like that.  Have you used any testing environment to profile the 
system and gathered any information about the state of the machine when 
it crashes?  Have you been able to reproduce the issue with any valid 
results yet?  I'd be asking myself how best to do this.

You can write a simple application to test your web application, or you 
could purchase some software to hit a bunch of web pages.  Basically you 
can spawn a bunch of threads from a given machine, randomly hitting 
different links, and try to reproduce the issue if you don't have any 
real hard logic there to test.  Do you have any application logic in 
your site?  If so, do you perform any logging or anything of that 
nature?  You may be getting some exception you could have caught and 
logged yourself, like an out-of-memory error, before Tomcat barfed out.
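As a rough sketch of that kind of tester (the base URL, page list, and counts below are all placeholders; point them at your own site and scale them up toward your real peak traffic):

```shell
#!/bin/sh
# Crude load-generator sketch.  BASE and PAGES are assumptions; set them
# to your own Tomcat instance and the links you want hammered.
BASE="${BASE:-http://localhost:8080}"
PAGES="/ /index.jsp"
WORKERS=3    # concurrent clients
LOOPS=5      # passes over the page list per client

worker() {
    n=0
    while [ "$n" -lt "$LOOPS" ]; do
        for p in $PAGES; do
            # ignore individual failures; we only care whether the server dies
            wget -q -O /dev/null "$BASE$p" 2>/dev/null || true
        done
        n=$((n + 1))
    done
}

i=0
while [ "$i" -lt "$WORKERS" ]; do
    worker &
    i=$((i + 1))
done
wait
echo "load run complete: $WORKERS workers x $LOOPS loops"
```

Watch catalina.out and /var/log/messages while it runs and see if you can make the process die on demand.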

Something obviously has to be happening for the process to just go away. 
  Do you have to use Apache2 as a front end to the application?  If not, 
see if you are able to reproduce the same issue using only Tomcat.  Be 
sure you edit your memory settings for Tomcat; don't try to run it in 
128MB of memory or something like that.  ulimit is fine as far as Linux 
goes, but you still need to be sure you don't have any limits on the JVM. 
  For instance, the default value for -Xmx is 64m, which means a 64 
megabyte maximum heap.

Depending on how much memory you have on your computer you probably want 
to up this (in your case you definitely want to up this).  With 
that many hits you may very well be getting enough hits at the same time 
during the busy moments of the day to crash you out, as I'm not sure how 
Tomcat will behave if it is hammered and doesn't have any room to play. 
  You can add something like -Xmx1024m (granted you aren't already using 
-Xmx) to your startup file so that you can give Tomcat a gig of memory to 
run in.  This gives your Java app that much memory to play around 
inside, and the JVM will use the other bit of memory that it can.  I 
assume you are on a 32-bit machine with a 32-bit JVM.  You might even be 
able to use -Xmx1800m if you have more memory.  The main thing is that 
the JVM will only use a max of 64MB by default for the Java application 
(Tomcat) unless you tell it it can use more.
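Assuming the stock catalina.sh startup scripts (which honor a JAVA_OPTS environment variable), the heap settings can go in the environment of whatever launches Tomcat; the exact values here are just a suggestion:

```shell
# Heap settings for the Tomcat JVM.  -Xms is the starting heap size,
# -Xmx the ceiling; 1024m assumes the box has well over a gig of RAM free.
JAVA_OPTS="-Xms256m -Xmx1024m"
export JAVA_OPTS
```

catalina.sh picks JAVA_OPTS up and passes it to the java command line when it starts the server.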

I'd start with upping the process memory and seeing how it behaves.  If 
the memory of the JVM stays used, then you have a memory leak somewhere. 
 Make sure you have the JSP compiler set to fork if you are using 
JSPs that change a lot.  There have been postings about a memory 
profiler that can be set up in Tomcat as well.  Search the archives for 
memory profiler/profiling and see if you can get its name and links. 
It might help.
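If memory serves (verify against the conf/web.xml shipped with your Tomcat 5 and the Jasper docs), the "jsp" servlet takes a fork init-param so JSP compiles run in a separate VM instead of inside the server's:

```xml
<!-- In $CATALINA_HOME/conf/web.xml, inside the "jsp" servlet definition. -->
<init-param>
    <param-name>fork</param-name>
    <param-value>true</param-value>
</init-param>
```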

For something simple, you could even set up a cron job on your 
machine to watch the java process, so you can get snapshots of just the 
system process memory over time as well (not as good as a profiler). 
Make the cron job run every 10 minutes or so, and append to a file. 
Simple little checks like that will tell you whether you even need to 
bother with the more gory details.  Heck, run a cron job like that every 
minute (it won't be hard on the system).  Dump the results out to a file, 
and later on, when it crashes, start looking over the dump and see if it 
only crashes when the process memory hits a certain level.  In your job 
you can grep out only the java processes.
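A minimal version of that snapshot script might look like this (the log path and script name are just suggestions):

```shell
#!/bin/sh
# Append a timestamped snapshot of the java processes' memory use.
# LOG defaults to the current directory here; in production you would
# point it somewhere like /var/log/java-mem.log.
LOG="${LOG:-./java-mem.log}"
{
    date
    # rss = resident memory in KB, vsz = virtual size in KB
    ps -eo pid,rss,vsz,pcpu,comm | grep '[j]ava'
} >> "$LOG"
echo "snapshot appended to $LOG"
```

Then a crontab entry along the lines of `* * * * * /usr/local/bin/java-mem-snapshot.sh` (path hypothetical) runs it every minute, and you can correlate the last few entries with the crash times.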


