cocoon-dev mailing list archives

From "Berin Loritsch" <blorit...@apache.org>
Subject RE: [Design] ContainerManager is under fire--let's find the best resolution
Date Fri, 07 Jun 2002 14:09:19 GMT
> From: Vadim Gritsenko [mailto:vadim.gritsenko@verizon.net] 
> 
> JDBC connection must be aware of pooling (special handling in the
> close() method), which does not look good - current Avalon 
> model where poolable component must not care about how it is 
> being pooled is *much* better.

The connection itself is not aware of it.  In fact, you can use just
about any vendor's JDBC driver with the same DataSourceComponent.
The DataSourceComponent wraps the JDBC driver's connections with a
proxy--just as the DataSource spec says.  It's not that difficult to do.
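To make the idea concrete, here is a minimal sketch of that proxy trick using java.dynamic proxies.  This is illustrative only--the class and method names are mine, not the actual DataSourceComponent implementation--but it shows how close() can return the connection to the pool without the driver ever knowing it is pooled:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.concurrent.BlockingQueue;

// Hypothetical pooling proxy: intercepts close() and returns the real
// connection to the pool; every other call is delegated untouched.
class PooledConnectionHandler implements InvocationHandler {
    private final Connection real;
    private final BlockingQueue<Connection> pool;

    PooledConnectionHandler(Connection real, BlockingQueue<Connection> pool) {
        this.real = real;
        this.pool = pool;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if ("close".equals(method.getName())) {
            pool.offer(real);              // return to pool instead of closing
            return null;
        }
        return method.invoke(real, args);  // delegate everything else to the driver
    }

    static Connection wrap(Connection real, BlockingQueue<Connection> pool) {
        return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] { Connection.class },
            new PooledConnectionHandler(real, pool));
    }
}
```

The vendor's driver only ever sees plain Connection calls, which is why any driver works with the same pooling component.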


> > I am advocating the same thing for the CM
> 
> Do you propose to make pooling concern of the component? 
> (I.e., manual pooling as in JDBC?)


I am for supporting components up to the Per-Thread request model,
and anything that needs a Per-Lookup request model needs to be
redesigned.  This allows the container to be smart about the instances.
It can either use ThreadLocals to create the components (too slow
on JDK 1.2 and before), or share a finite number of components
between the threads.  The number of component instances really
shouldn't exceed the number of threads.
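A minimal sketch of the ThreadLocal variant of the Per-Thread model (again, my own illustrative code, not Avalon's): each worker thread lazily gets its own instance, so the instance count can never exceed the thread count.

```java
import java.util.function.Supplier;

// Hypothetical Per-Thread provider: one component instance per thread,
// created on first lookup by that thread and reused thereafter.
class PerThreadProvider<T> {
    private final ThreadLocal<T> local;

    PerThreadProvider(Supplier<T> factory) {
        this.local = ThreadLocal.withInitial(factory);
    }

    T lookup() {
        return local.get();   // same instance for the same thread, every time
    }
}
```

No release() is needed at all in this model, which is part of the appeal.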

Consider Cocoon.

The Transformer is a Per-Lookup design.  This is too limiting.  Let's
assume we have a pipeline with 5 transformers.  That means the pool
for the transformers must be 5*Tn, where Tn is the number of threads.
Servlet engines can have anywhere from 20 to 100 threads handling
incoming requests, so even if we cut the pool to 70% of that value,
we are talking at least 70 components (70% of 5*20) and at most 500
(5*100) in use at any one time.
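The arithmetic is worth spelling out (assumptions as above: a 5-transformer pipeline, 20 to 100 servlet threads, a 70% trim at the low end):

```java
// Back-of-the-envelope check of the Per-Lookup pool sizes quoted above.
class PoolMath {
    static int poolSize(int transformersPerPipeline, int threads) {
        return transformersPerPipeline * threads;  // one instance per lookup per thread
    }

    public static void main(String[] args) {
        int low  = (int) (poolSize(5, 20) * 0.7);  // trimmed low end
        int high = poolSize(5, 100);               // untrimmed high end
        System.out.println(low + " to " + high);   // prints "70 to 500"
    }
}
```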

Worse, certain optimizations on the use of these artifacts become much
more difficult to assemble.  Caching mechanisms, for example, have to
be worked into the scheme of things across multiple component instances
as opposed to once per component type.  Furthermore, you cannot
directly substitute a cached resource for the original.

With the current state of affairs, it is very difficult to understand
why Cocoon uses 50-60 MB of RAM for even the simplest of systems.
Overpooling is an antipattern just as underpooling is.  We need to use
more judicious pooling mechanisms--ones that are smarter.  I highly
recommend looking at Fortress in Excalibur for a better pool mechanism.

Fortress can do everything ECM can do, and do it faster, with
self-discovering pool sizes.  I highly recommend it.  Releasing
components no longer slows down the critical path of processing
requests.  It is faster to start up and faster to use, but slower
to shut down completely.  That's OK, because Cocoon can use a quicker
startup time and could stand to process requests even faster.

At low simultaneous thread counts, such as 10 to 15 threads, there isn't
a lot of difference between ECM and Fortress in the time it takes to
look up and release components.  However, under high load with upwards
of 100 threads or more, Fortress beats ECM in performance by a factor
of 16 (and in some cases even more).  The main reason is the gentler
saturation curve due to asynchronous releasing of components.
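To illustrate what asynchronous releasing means (a hedged sketch of the idea, not Fortress's actual implementation): release() simply enqueues the instance and returns immediately, and a background thread does the real recycling work off the request's critical path.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative pool with asynchronous release: the caller of release()
// pays only for an O(1) enqueue; a daemon thread recycles instances
// back into the available pool in the background.
class AsyncReleasingPool<T> {
    private final BlockingQueue<T> available = new LinkedBlockingQueue<>();
    private final BlockingQueue<T> toRecycle = new LinkedBlockingQueue<>();

    AsyncReleasingPool() {
        Thread recycler = new Thread(() -> {
            try {
                while (true) {
                    // Any reset/validation work happens here, not on the
                    // request thread that called release().
                    available.put(toRecycle.take());
                }
            } catch (InterruptedException e) {
                // shutdown requested
            }
        });
        recycler.setDaemon(true);
        recycler.start();
    }

    void seed(T instance) { available.offer(instance); }

    T lookup() throws InterruptedException { return available.take(); }

    void release(T instance) {
        toRecycle.offer(instance);   // non-blocking for the caller
    }
}
```

Under load, request threads never contend on the recycling work, which is what flattens the saturation curve.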

In a world where we did not need to release components explicitly, we
would see even better improvements.

The higher abstraction that I advocate actually helps organize our
thinking and makes it easier to integrate optimization opportunities.


---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org
For additional commands, email: cocoon-dev-help@xml.apache.org

