cocoon-dev mailing list archives

From Peter Royal <>
Subject Re: Cocoon 2.0 Scalability Disappointment
Date Fri, 30 Nov 2001 18:55:20 GMT
On Friday 30 November 2001 01:22 pm, you wrote:
> This stack is a LinkedList to which EventData objects are appended. This
> means for each element, 2 objects are allocated : the EventData object,
> and a LinkedList node. We can change this stuff to use a single
> ArrayStack (from excalibur.collections) and no EventData object, which
> should significantly reduce CPU consumption.
> So my opinion is remove this costly stuff and forbid return statements
> in xsp:logic. This will make a speedy XSP engine.
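The allocation arithmetic in the quote can be sketched in plain Java. This is not the Excalibur ArrayStack API, just an illustration of why an array-backed stack allocates less than a LinkedList of wrapper objects:

```java
public class StackSketch {
    // LinkedList approach: each push allocates an EventData wrapper
    // plus an internal LinkedList node -- two objects per element.

    // Array-backed approach: one growable Object[] shared by all
    // elements; a push allocates nothing unless capacity is exhausted.
    static class ArrayStack {
        private Object[] data = new Object[32];
        private int size;

        void push(Object o) {
            if (size == data.length) {
                Object[] bigger = new Object[size * 2];
                System.arraycopy(data, 0, bigger, 0, size);
                data = bigger;
            }
            data[size++] = o;
        }

        Object pop() { return data[--size]; }
        boolean isEmpty() { return size == 0; }
    }

    public static void main(String[] args) {
        ArrayStack stack = new ArrayStack();
        // Push the raw values directly -- no EventData wrapper needed.
        stack.push("uri");
        stack.push("local");
        stack.push("raw");
        while (!stack.isEmpty()) {
            System.out.println(stack.pop());
        }
    }
}
```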

Yes! How about making that an option in the xsp:page element? 
enable-error-handling or something as such.
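Something like the following, where enable-error-handling is only the hypothetical attribute proposed above, not an existing XSP feature:

```xml
<!-- Hypothetical attribute -- a sketch of the proposal, not current XSP. -->
<xsp:page language="java"
          xmlns:xsp="http://apache.org/xsp"
          enable-error-handling="false">
  <xsp:logic>
    // With error handling disabled, return statements inside
    // xsp:logic would be forbidden, so no event stack is needed.
  </xsp:logic>
</xsp:page>
```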

From a caching standpoint, I've removed everything dynamic from my XSP 
documents so the results are cacheable. (I still like XSP because I can 
precompute static final Java objects to stick in as Request attributes.)

As such, I know for certain that I don't need the event cache that XSP keeps; 
the only code in the generate() function is calls to the SAX methods and 
some ifs on variables that have already been defined.
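A sketch of the static-object pattern mentioned above: build an immutable object once at class-load time, then hand it to each request as an attribute so the XSP page itself stays fully cacheable. The class and attribute names here are made up for illustration:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Illustrative only: an immutable map computed once at class-load
// time, so per-request work is just a setAttribute() call.
public class NavLinks {
    public static final Map LINKS;

    static {
        Map m = new HashMap();
        m.put("Home", "/index");
        m.put("Reports", "/reports");
        LINKS = Collections.unmodifiableMap(m);
    }
}

// Elsewhere, per request (e.g. in an action or servlet filter):
//   request.setAttribute("nav", NavLinks.LINKS);
```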

> 0) before disabling logging, search for messages such as
> "decommissioning instance of...". This reveals some undersized pools
> which are corrected by tuning cocoon.xconf and sitemap.xmap. Undersized
> pools act like an object factory, plus the ComponentManager overhead.

Definitely do the above. That can literally kill the performance of the 
server, especially if a pool turns into a factory for XSLT transformers 
*shudder*.
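For concreteness, that kind of pool tuning in sitemap.xmap might look like this; the pool-min/pool-max/pool-grow attributes are as in Cocoon 2.0, but the values here are purely illustrative:

```xml
<!-- Raise the pool ceiling on a heavily used transformer so
     "decommissioning instance of..." messages stop appearing.
     Values are illustrative; size to your observed concurrency. -->
<map:transformer name="xslt"
                 src="org.apache.cocoon.transformation.TraxTransformer"
                 pool-min="8" pool-max="32" pool-grow="4"/>
```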

Also cache as much as you can. If you have a pipeline that is different on 
each request, reorganize it so that any static pieces are done first, so 
that part can be cached. Pipeline organization is huge.
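A sketch of that reorganization in sitemap terms: put the static transform first so its result is cacheable, and apply the request-dependent transform last. The pattern and stylesheet names are made up for illustration:

```xml
<!-- Static, cacheable steps up front; per-request step last, so
     everything above it can be served from the cache. -->
<map:match pattern="report">
  <map:generate type="file" src="docs/report.xml"/>
  <map:transform src="stylesheets/static-layout.xsl"/>
  <map:transform src="stylesheets/per-request.xsl"/>
  <map:serialize type="html"/>
</map:match>
```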

For some numbers (the initial ones are sad; I'm almost embarrassed to admit 
them) from our tests:

 1 thread * 10 iter: 10s
 5 threads * 10 iter: 30s+ --> I eventually gave up; pools turned into 
factories and it was ugly.

Reorganized pipeline (static pieces up front to utilize caching):
 1 thread * 10 iter: 2s
 5 threads * 10 iter: 4s

This is a webapp on an intranet, so concurrency isn't as high as it would be 
on a public website. Each request also generates at least one RMI call, 
which is a few hundred msec minimum. I'm not done tuning yet; I just got it 
to the point of "good enough", so other things have higher priority. My next 
point of attack will be to put a profiler on the system to see where to 
focus next. I know the RMI is an issue, and I plan to attempt to get 
Catalina running under Phoenix to fix that; since the RMI calls are a few 
hundred msec each, I'm sure that will help a bit.

peter royal ->
