cocoon-users mailing list archives

From footh <fo...@yahoo.com>
Subject Re: Javaflow - major memory issue: more info
Date Thu, 20 Mar 2008 19:14:06 GMT
OK, so I did a lot more digging into the Javaflow issue.  This time I would ram the server
with the load tester, and then wait to see how the continuations were cleaned up.  After
running a bunch of tests, I found that they were cleaned up in a very regular manner.  For
example, I would run 1000 samples, the Tomcat memory would shoot up to a point, then I'd
wait 10 minutes and the continuations would for the most part clean up, but Tomcat memory
would stay the same.  Then I ran 1500 samples, and Tomcat memory would remain stable until
I hit somewhere around the 1000th sample, at which point the memory would start going up
again.  After waiting 10 minutes, the memory appeared to clear back to the baseline (Tomcat
still staying at the new 1500-sample total).  The cycle continued at 2000 samples, where
Tomcat total memory wouldn't go up until around the 1500th sample, the memory clearing
after 10 minutes, and so on.
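To illustrate the pattern I was seeing, here is a toy model (plain Java, not Cocoon code;
all names are mine) of continuations that accumulate under load and are only reclaimed by a
periodic sweep once they pass a fixed expiry, which would explain the plateau-then-drop
behaviour:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model (NOT Cocoon's implementation) of continuations that are
// only reclaimed by a sweep once they are older than a fixed expiry.
public class ExpirySweepDemo {
    static final long EXPIRY_MS = 10 * 60 * 1000; // the ~10 minutes I observed

    // continuation id -> creation time (ms)
    final Map<Integer, Long> continuations = new LinkedHashMap<>();

    void register(int id, long now) {
        continuations.put(id, now);
    }

    // Remove every entry older than EXPIRY_MS; return how many were swept.
    int sweep(long now) {
        int removed = 0;
        for (Iterator<Long> it = continuations.values().iterator(); it.hasNext();) {
            if (now - it.next() >= EXPIRY_MS) {
                it.remove();
                removed++;
            }
        }
        return removed;
    }

    public static void main(String[] args) {
        ExpirySweepDemo demo = new ExpirySweepDemo();
        long t0 = 0;
        for (int i = 0; i < 1000; i++) demo.register(i, t0); // burst of 1000 samples
        System.out.println(demo.sweep(t0 + 5 * 60 * 1000));  // 5 min later: 0 reclaimed
        System.out.println(demo.sweep(t0 + 10 * 60 * 1000)); // 10 min later: 1000 reclaimed
        System.out.println(demo.continuations.size());       // 0 remain
    }
}
```

Under a burst load this holds everything for the full 10 minutes, then frees it all at once,
which matches what I saw; under a constant load the sweep can never catch up.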

So, it appears that the continuations eventually clean up nicely.  Of course, a constant
load would kill the system, as the continuation clean-up is too slow to keep up.  Two
things to note:

1) The expiry parameter in cocoon.xconf did not work.  It was always 10 minutes no matter
what I set it to.
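For reference, the section I was editing looks something like this (values in milliseconds,
and purely illustrative; I may well be changing the wrong knob, which could explain why the
setting seemed to have no effect):

```xml
<!-- Continuations manager settings in cocoon.xconf (illustrative values).
     time-to-live should be the continuation expiry in ms; expirations-check
     controls how often the janitor sweeps expired continuations. -->
<continuations-manager logger="flow.manager" time-to-live="600000">
  <expirations-check type="periodic">
    <offset>180000</offset>
    <period>180000</period>
  </expirations-check>
</continuations-manager>
```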

2) The bottom paragraph of this page:  http://cocoon.apache.org/2.1/userdocs/flow/using.html
states that when using the sendPage method, no continuation is created and memory resources
are not used.  This does not seem to be the case, as my test case uses a one-line flow with
a sendPage call.

Back to my specific case: I then went to test my full-blown application.  After running a
series of tests similar to the ones described above, I discovered an area that appears to
be a problem.  I have a main application that uses a "primary" javaflow, and a
sub-application of the main app that needs the general logic in the primary flow and then
its own logic in its own flow.  So a request runs through two javaflows and thus two
sendPage calls.  To make a long story short, this seemed to cause a memory leak.  Running
just the main flow seemed OK; running just the sub-flow worked OK as well (there appeared
to be a bit of a leak, but that was inconclusive).  However, running a page through both
flows showed a clear loss of memory.  The continuations did not clean up.
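To make the shape of the problem concrete, the arrangement is roughly the following (a
sketch with made-up names, dependent on the Cocoon javaflow block; the real flows are
obviously larger):

```java
import org.apache.cocoon.components.flow.java.AbstractContinuable;

// Primary flow: general logic shared by the main app and sub-apps.
public class PrimaryFlow extends AbstractContinuable {
    public void doMain() {
        // ...general logic...
        // First sendPage of the request; the target pipeline hands
        // the request on to the sub-application flow below.
        sendPage("internal/sub-app-entry");
    }
}

// Sub-application flow: its own logic on top of the primary flow.
public class SubFlow extends AbstractContinuable {
    public void doSub() {
        // ...sub-app-specific logic...
        // Second sendPage of the same request - the combination
        // that appears to leak in my tests.
        sendPage("view/sub-app-page");
    }
}
```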

So, this could very well be the source of my problems.  Based on note 2) above, I must've
figured this would be OK since sendPage supposedly doesn't have a large memory footprint.
However, it appears to cause a problem based on my tests.  I'm hoping the experts can chime
in here: is running one page through multiple flows as bad an idea as it appears to be?

-JF

--- Antonio Gallardo <agallardo@agssa.net> wrote:

> footh wrote:
> > Ok, so I applied the patch supplied by cocoon-2109.  The results appear to be the
> > same.  After 10K samples from the load tester, the Tomcat memory was virtually
> > identical from before and after the patch.  Then, I ran the load tester with the
> > profiler attached.  With the slower page response time (due to the profiler) the
> > memory *seemed* to move up slower with the new code, but that is by no means a
> > scientific observation.  In the end, the memory still increased.
> >
> I think this is expected, because if the continuation did not expire 
> (timeout), then the result is the same (with and without the patch). The 
> difference is that with the patch, all the expired continuations should 
> be garbage collected.
> 
> For testing, it would be worth setting the continuation expiration time 
> to a lower value. I think it is in cocoon.xconf.
> 
> > Upon examining the profiler data, there was a difference however.  Instead of the
> > ContinuationsManagerImpl (CMI) having the greatest retained size, it was a bunch
> > of HashMap objects at the top of the list.  The one CMI object was in second
> > place (63% on this run), but about 19K HashMap objects were responsible for 92%.
> > I assume this is because the patch changes a variable in CMI to a HashMap, and so
> > the retained size of the CMI object gets lumped into the HashMap objects, whereas
> > pre-patch this wasn't the case.
> >
> I cannot find a newly introduced HashMap in the patch:
> https://issues.apache.org/jira/secure/attachment/12363582/ContinuationsManagerImpl.java.patch
> > The real question I have is, why is the CMI object retained size so large?
> It has to keep the whole running environment to restore it when we call 
> the same continuation again. However, it may be worth reviewing the 
> memory size per continuation.
> 
> > As the load testing continues, the memory retained by CMI dominates the rest of
> > the objects.  If the load stops, it is eventually cleaned up, but by then it may
> > be too late.
> >
> > If there are any other suggestions, I'm willing to put more effort into figuring
> > this out.
> >
> Thanks.
> 
> Best Regards,
> 
> Antonio Gallardo.
> 
> >
> > --- footh <footh@yahoo.com> wrote:
> >
> >   
> >> Ok, I'll give the patch a shot.
> >>
> >> I'm using Tomcat version 5.5.26.  Concerning the
> >> store-janitor values, I haven't changed them from the
> >> default.
> >>
> >> In fact, as I stated in my first post, the problem
> >> occurs even on the sample javaflow calculator
> >> application (relative url: 
> >> /cocoon/samples/blocks/javaflow/calculator.do).  Try
> >> hitting that page a bunch of times with a load tester.
> >>
> >>
> >> --- Antonio Gallardo <agallardo@agssa.net> wrote:
> >>
> >>     
> >>> Hi footh,
> >>>
> >>> Testing the patch is a good start. Would you provide
> >>> tomcat version and 
> >>> the parameters you use to start it?
> >>>
> >>> How do you configure cocoon.xconf, in special the
> >>> values for:
> >>>
> >>> <store-janitor logger="core.store.janitor"/>?
> >>>
> >>> Best Regards,
> >>>
> >>> Antonio Gallardo.
> >>>
> >>> footh wrote:
> >>>       
> >>>> Thanks for all the replies.  I did some more digging
> >>>> into the profiling data.  It turns out that the
> >>>> ContinuationsManagerImpl is at the top of the object
> >>>> path of the byte arrays, where
> >>>> org.apache.cocoon.environment.util.BufferedOutputStream
> >>>> is the actual parent of the arrays.
> >>>>
> >>>> Looking down the path, I can see the TreeSet and
> >>>> SortedSet that are mentioned in cocoon-2109.  I would
> >>>> say this issue is a likely cause for the memory
> >>>> ballooning.
> >>>>
> >>>> The load tester is simulating 5 users staggered 10
> >>>> seconds apart and only hitting a couple of very simple
> >>>> pages.  Yet within 15 minutes, the Tomcat memory use
> >>>> approaches 1GB.  After stopping the load tester and
> >>>> examining the memory, I did notice in the profiler
> >>>> that the byte arrays eventually cleaned up.  However,
> >>>> looking at the Task Manager (using Windows), Tomcat
> >>>> was still holding the full amount of memory from when
> >>>> I stopped the load tester, i.e. it didn't go down when
> >>>> the byte arrays cleaned up.
> >>>>
> >>>> I suppose the next step is to try the patch provided
> >>>> in 2109.  Any other suggestions?

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@cocoon.apache.org
For additional commands, e-mail: users-help@cocoon.apache.org

