geronimo-user mailing list archives

From "Jay D. McHugh" <jaydmch...@gmail.com>
Subject Re: Runaway garbage collection on G 2.2.1
Date Wed, 07 Sep 2011 19:43:02 GMT
Hello all,

Just in case someone else runs into a similar situation...

I was able to figure out what was happening - just not why.

In order to simplify and standardize how our JSP pages are built, we 
created custom JSTL tag objects for all of the input field types that we 
use (input type: text, button, etc.).  This has worked well for us for 
several years.

About three weeks ago, our database reached some critical size at which 
some pages went into a runaway cycle of creating and destroying the 
tag objects.  That triggered a corresponding cycle of garbage collection 
(over 400 garbage collections per minute).
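To put a number on that, I sampled the JVM's own GC counters before and after serving a page.  This is just a minimal standalone sketch using the standard java.lang.management API - the class name and the one-second interval are made up, not anything from my app:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Rough sketch: sample the JVM's built-in GC counters to quantify
// a "runaway GC" episode.  Nothing here is Geronimo-specific.
public class GcRateSampler {
    public static long totalCollections() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long c = gc.getCollectionCount();
            if (c > 0) total += c; // -1 means "undefined" for this collector
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        long before = totalCollections();
        // ... request the slow page here, or just watch the server ...
        Thread.sleep(1000);
        long after = totalCollections();
        System.out.println("collections in interval: " + (after - before));
    }
}
```

(`jstat -gcutil <pid> 1000` from outside the JVM shows the same thing without touching the code.)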

And pages that previously took a fraction of a second to be served began 
taking as long as 5 minutes.

By tuning the memory allocated to the VM I was able to get the time cut 
down to about 30 seconds (still way too long).
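The tuning amounted to giving the VM a larger heap and, in particular, a larger young generation so the short-lived tag objects could die in minor collections.  The values below are illustrative, not the exact ones from my server:

```shell
# Illustrative only -- bigger heap and young generation than the
# original -Xmx1024m / -XX:MaxNewSize=128m settings quoted below.
JAVA_OPTS="-Xms2048m -Xmx2048m -XX:NewSize=256m -XX:MaxNewSize=512m \
  -XX:MaxPermSize=256m -verbose:gc -XX:+PrintGCDetails"
```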

After I figured out that it was the tag processing that was causing the 
trouble and replaced all of the tags with plain HTML, the time dropped 
back to what it was previously (a fraction of a second).

The strange part of all of this is that I can restore an old copy of my 
database (from before the problem began) and the old JSTL version of the 
page loads fine.  But the current data causes the JSTL version to explode.

I cannot figure out how the amount of data I have would affect the JSTL 
processing.  Or, why other pages in my system continue to work fine even 
though they still have the JSTL tags in them (I have only changed the 
problem pages so far).

If anyone has any ideas, I'd be happy to hear them.

In the meantime, I am going to see if I can put together a test case 
that duplicates the problem.
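The sort of test case I have in mind is a standalone one that just simulates allocating one short-lived "tag object" per table row per request, the way a custom-tag page does.  The class and the row counts here are made up for illustration - there is nothing Geronimo- or JSTL-specific in it:

```java
// Simulate per-row allocation of short-lived handler objects,
// as a page full of custom input tags would do on each request.
// FakeTag and the row counts are invented for illustration.
public class TagChurnTest {
    static final class FakeTag {
        final char[] buffer = new char[512]; // per-tag working storage
        String render(int row) { return "<input name=\"f" + row + "\"/>"; }
    }

    static int renderPage(int rows) {
        int chars = 0;
        for (int i = 0; i < rows; i++) {
            FakeTag tag = new FakeTag(); // fresh handler per row -> GC churn
            chars += tag.render(i).length();
        }
        return chars;
    }

    public static void main(String[] args) {
        // Small "old database" page vs. large "current database" page.
        System.out.println("small page: " + renderPage(100) + " chars");
        System.out.println("large page: " + renderPage(100_000) + " chars");
    }
}
```

If the GC rate scales with the row count here the way it does on my pages, that would at least confirm the data-volume connection.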

On 08/18/2011 04:07 PM, Jay D. McHugh wrote:
> Hello all,
>
> My app which is bundled as an EAR consisting of an EJB jar and a WAR
> file has been running quite happily in Geronimo for years (it has been
> in continuous development the entire time).
>
> But yesterday, it suddenly started to flake out, running through a
> tremendous number of garbage collections when attempting to open some
> (but not all) of the JSPs in the WAR file. There are so many collections
> occurring that it sometimes takes several minutes before the JSP gets
> processed and sent to the browser.
>
> There is no useful logging that happens, so it appears that the problem
> is occurring somewhere in Tomcat.
>
> I do have a filter that is configured, but when I put some logging into
> it, that logging was not hit until after the GC looping finished.
>
> Does anyone have a suggestion as to where I could look to figure out
> what might be going on?
>
> This is running on an Ubuntu Linux 10.04.3 64-bit machine with 6 GB of
> physical memory and two dual-core hyperthreading processors (so it looks
> like 8 cores to the OS).
>
> Here is my JAVA_OPTS variable:
>
> JAVA_OPTS="-Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:NewSize=64m
> -XX:MaxNewSize=128m -verbose:gc -XX:+PrintGCDetails
> -XX:+HeapDumpOnOutOfMemoryError"
>
> With the exception of the garbage collection flags and the 'dump on
> error' parameter, I have been running with this setup forever.
>
> Is it possible for the number of classes in my EAR to cause a problem?
>
> I am pretty much at my wits' end - but I have to fix it because it is on
> my production server, and all of my attempts to set up a new system have
> the same problem.
>
> Thanks in advance for any hints,
>
> Jay
