portals-jetspeed-user mailing list archives

From Shan Gopalakrishnan <sgopa...@cisco.com>
Subject Re: Jetspeed Performance tips
Date Fri, 21 Mar 2003 20:29:03 GMT
Santiago,

One more piece of useful information: we set the heap size to 750MB, and 
the load test ran 100 users with a think time of 30 to 60 seconds.
We see the major GC start when the heap is full; it clears about 550MB of 
memory, leaving roughly 200MB of live objects. Within a span of another
5 seconds the 550MB is used up again and the next major GC is triggered. 
Response time suffers while the major GC is running.
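
In case it helps anyone reproduce this, we watch the collections with 
verbose GC logging (a minimal sketch; -verbose:gc, -Xms and -Xmx are 
standard JVM flags, but adapt the startup to your own Tomcat layout):

    # one line per collection, with heap occupancy before/after
    export CATALINA_OPTS="-verbose:gc -Xms750m -Xmx750m"
    $CATALINA_HOME/bin/catalina.sh run

The "Full GC" lines in the output show how much each major collection 
reclaims and how much live data remains.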

Hope this helps

- Shan


At 07:38 PM 3/21/2003 +0100, Santiago Gala wrote:
>Shan Gopalakrishnan wrote:
>>We observed a few more things.  I'm posting this in the interest of 
>>maturing the framework beyond where it stands today.
>>This is definitely not meant as criticism.
>
>Feel free to criticize. It is the only way we have to improve Jetspeed.
>
>>We are seeing the garbage collector (major GC) triggered frequently, 
>>I think because a huge number of objects are being created.
>
>Some profiling could be handy (see the hprof sketch after this list). AFAIK:
>
>Wrappers for security are not recycled (they *must* be immutable).
>PortletSets are created new for every request, due to threading issues.
>--> Both kinds of objects should be reclaimed from "young" space, unless we 
>are missing some "nullification" in permanent objects.
>
>The registries are reloaded every 5 minutes, which means a whole round of 
>Castor unmarshalling happens, with new objects being created (and fading 
>away) each time. If your registries don't change often, you could make 
>this interval larger (see the sketch below).
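>
>Something like this in JetspeedResources.properties should do it; the 
>property key here is an assumption on my part, so check the file shipped 
>with your version for the actual name:
>
>    # assumed key -- reload registries hourly instead of every 5 minutes
>    services.Registry.refreshRate=3600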
>
>Every time the DiskCacheDaemon runs, it will call Xerces and Xalan to 
>transform RSSPortlets.
>
>Every 30 minutes you will have Tomcat sessions expiring. If your load test 
>does not keep sessions, you will fill a lot of memory with throwaway 
>sessions that will never be reused. I imagine you are using a testing 
>framework that keeps the session across a number of hits per "user".
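>
>If the test client cannot keep sessions, a stopgap is to shorten the 
>session timeout in the webapp's web.xml so abandoned sessions become 
>collectable sooner (standard servlet configuration; 5 minutes is just an 
>example value):
>
>    <session-config>
>        <!-- minutes; the default is 30 -->
>        <session-timeout>5</session-timeout>
>    </session-config>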
>
>A lot of other objects could come from Velocity ASTs, Torque peers, or 
>other places. Even from your portlets ;-)
>
>Around Spring 2001 some profiling and tuning was done by the IBM team, 
>David, Raphael and myself. I don't know of any similar effort since then. 
>It led to pooling of rundata and some other objects in Turbine, and to 
>changing String concatenation to StringBuffer.append() (a small example 
>follows). We also discovered a lot of duplicated initializations and 
>similar stuff. It is always funny to see how big bugs can hide in code.
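>
>The concatenation change looks like this (the loop body and getContent() 
>call are only for illustration):
>
>    // Before: each += allocates a new String and copies the old contents
>    String html = "";
>    for (int i = 0; i < portlets.length; i++) {
>        html += portlets[i].getContent();
>    }
>
>    // After: one growable buffer, far fewer temporary objects
>    StringBuffer buf = new StringBuffer();
>    for (int i = 0; i < portlets.length; i++) {
>        buf.append(portlets[i].getContent());
>    }
>    String html = buf.toString();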
>
>But the behaviour you are seeing looks more like the bug I pointed to 
>below, or a bug in the HotSpot compiler. Could you switch temporarily to 
>JDK 1.3.1_07 or 1.4.0_XX and see if the same happens? (According to 
>reports on cocoon-dev, 1.4.0 does not have the StringBuffer bug.)
>
>>This could be inherited from the Turbine implementation.  We are now 
>>playing with the various garbage collector options in JDK 1.4.1: running 
>>the GC in parallel, setting a large heap size, giving a certain 
>>percentage to the young generation vs. the old generation, etc.
>>To my knowledge the minor GC only looks at the young generation area, 
>>and indeed we didn't see any benefit from increasing the young 
>>generation's share.  The spike still remains and gives poor response 
>>times; the symptom just shows up earlier or later in the test run, 
>>depending on the heap size and GC configuration. (The flags we are 
>>varying are sketched below.)
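>>
>>For reference, this is the kind of command line we are trying (the 
>>values are only an example; the ParNew and concurrent mark-sweep 
>>collectors appeared in Sun's 1.4.1 JVM, but availability can vary by 
>>platform and build):
>>
>>    java -Xms750m -Xmx750m \
>>         -XX:NewRatio=3 \
>>         -XX:+UseParNewGC \
>>         -XX:+UseConcMarkSweepGC \
>>         ... (usual Tomcat bootstrap arguments)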
>
>If the problem is that some "bunches" of objects get into "old" space 
>because they are pointed to by persistent objects, then a judicious 
>amount of "= null" on recycling can help a lot (sketch below). I've just 
>checked DefaultJetspeedRunData and it is disposed of properly (unless 
>something is broken in Turbine and it is not really disposed).
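>
>A minimal sketch of the pattern, with a hypothetical pooled class and 
>field names; the point is that a long-lived pool entry must not keep 
>pointing at request-scoped objects between uses:
>
>    import java.util.Map;
>
>    public class PooledRequestData {
>        private Object portletSet;   // e.g. a per-request PortletSet
>        private Map attributes;
>
>        // called when the object goes back to the pool
>        public void dispose() {
>            portletSet = null;       // let the per-request graph die young
>            if (attributes != null) {
>                attributes.clear();  // drop references held by the map
>                attributes = null;
>            }
>        }
>    }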
>
>>- Shan
>>At 12:43 PM 3/21/2003 +0100, Santiago Gala wrote:
>>
>>>Santiago Gala wrote:
>>>
>>>>Shan Gopalakrishnan wrote:
>>>
>>>
>>>(...)
>>>
>>>>>Has anyone done similar tests, and did you observe something like that?
>>>>>Do you have any suggestions on where to look further?  This is on Solaris
>>>>>8 with JDK 1.4.1_01, Tomcat 4.1.18 and Jetspeed 1.4b3.
>>>
>>>
>>>http://developer.java.sun.com/developer/bugParade/bugs/4724129.html
>>>
>>>may well be related. A very scary bug, BTW, which could explain why I'm 
>>>seeing Ant or Maven builds hang forever and claim all the memory on my 
>>>machine, and also server VM crashes with OutOfMemoryError after Tomcat 
>>>context reloading.
>
>
>--
>Santiago Gala
>High Sierra Technology, S.L. (http://hisitech.com)
>http://memojo.com?page=SantiagoGalaBlog


---------------------------------------------------------------------
To unsubscribe, e-mail: jetspeed-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jetspeed-user-help@jakarta.apache.org

