tomcat-users mailing list archives

From Leon Rosenberg <>
Subject Re: What is the healthy interval length for young gc?
Date Mon, 03 Jan 2011 14:50:54 GMT
Hello Chuck,

On Mon, Jan 3, 2011 at 3:23 PM, Caldarale, Charles R
<> wrote:
>> By increasing the space on the new gen alone, you may make Minor
>> GC's less frequent.
> And by specifying fixed sizes for generations, you can upset the balance between old
> and new, and prevent the GC logic from adjusting the ratios for the current workload characteristics.

Generally, yes. But I cannot increase the old gen any further, because it
would slow down the GC by another 10 seconds (the last increase by 2 GB
cost us 10 seconds), and that would cause all the Tomcats to drop out of
the lb pool. So my only option is to increase the 'new' space, which
I'm investigating.
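For what it's worth, this is roughly the kind of setting I'm experimenting with. The heap and young-gen sizes below are placeholders, not our real numbers; the flags themselves are standard HotSpot options:

```shell
# Sketch: grow only the young generation while keeping the total heap
# (and therefore the old gen ceiling and its full-GC pause) unchanged.
# Sizes here are illustrative placeholders, not our production values.
export CATALINA_OPTS="-Xms6g -Xmx6g \
  -XX:NewSize=1536m -XX:MaxNewSize=1536m \
  -XX:SurvivorRatio=8"
```

Fixing NewSize equal to MaxNewSize removes the adaptive resizing Chuck mentioned, which is exactly the trade-off under discussion here.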

>> I do not think that the fact of copying the objects from the new
>> gen to the old gen "causes the objects to live too long".  They
>> live long because they are still being referenced somewhere and
>> thus cannot be "forgotten".
> This part is true.

Well, that depends on the definition of 'long', doesn't it? Usually I
would expect (and hope) that all request-bound objects (beans,
modelmap parts, tag instances, byte arrays) get collected in the
next minor GC run after the request is finished. I have the feeling
that this is not the case.
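The expectation itself is easy to demonstrate in isolation: once the last strong reference to a request-scoped object is dropped, the collector is free to reclaim it. A minimal sketch (RequestScopeDemo is my own toy class, and System.gc() here stands in for the next minor GC, which it is only a hint towards):

```java
import java.lang.ref.WeakReference;

public class RequestScopeDemo {
    public static void main(String[] args) throws Exception {
        // Simulate a request-scoped object: strongly referenced only
        // while the "request" is being processed.
        Object requestBean = new byte[1024 * 1024];
        WeakReference<Object> ref = new WeakReference<>(requestBean);

        requestBean = null;  // request finished, last strong reference dropped
        System.gc();         // hint; in production the next minor GC does this
        Thread.sleep(100);

        // If the object was truly unreferenced, the collector may reclaim it
        // and clear the weak reference.
        System.out.println(ref.get() == null ? "collected" : "still reachable");
    }
}
```

If request-bound objects in the real application survive several minor GCs instead, something is still holding a reference to them (or they are being tenured), which is what I suspect is happening here.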

> Given the relatively short duration of the minor GC operations relative to the observed
> response time, I suspect that something else is going on during the period that may be causing
> a spike in heap usage and the slowdown - but that can't be proven without more data about
> exactly what's going on.  Looking at the heap usage visually will help: are you seeing a
> regular sawtooth pattern, or does heap usage remain fairly flat and then suddenly spike?

A regular sawtooth, which gets more intense at peak times.
Unfortunately we see a lot of other indicators increase as well (used
threads near the limit, number of Apache processes, eth traffic, and
so on). It's hard to say which is a symptom and which is the root
cause.
The most irritating thing of all is that the "committed" memory of the
Linux VM makes a huge (12 GB) leap to the top. This one I can't
explain at all ;-)
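One thing I still want to check (just a diagnostic sketch; the PID is a placeholder): whether the leap in the kernel's committed memory lines up with the JVM expanding its heap towards -Xmx, since committed heap can jump well ahead of used heap.

```shell
# Kernel-side view: total memory the kernel has promised to all processes.
grep -E 'Committed_AS|CommitLimit' /proc/meminfo

# JVM-side view (PID is a placeholder): jstat shows committed vs. used
# capacity per generation, so a heap expansion would be visible here.
jstat -gccapacity 12345
```

If the committed figures do line up, setting -Xms equal to -Xmx would at least make the commitment happen once at startup instead of as a leap under load.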

