geronimo-dev mailing list archives

From David Jencks <>
Subject Re: JCA Connection Management proposal
Date Tue, 30 Sep 2003 04:36:40 GMT
Sorry for the long delay in replying to this.

And perhaps this should be on the wiki, but I'm writing here anyway.

Based on my experience with the JBoss ConnectionManager 
implementations, I'm convinced that, given the flexibility desired, any 
monolithic or inheritance-based approach will quickly become 
unmaintainable.  I've started writing an interceptor-based 
ConnectionManager and hope it will be in a previewable state soon.  The 
idea is that each chunk of functionality, such as getting a 
ManagedConnection from a ManagedConnectionFactory, or enlisting a 
ManagedConnection in a transaction, lives in a separate interceptor.  
By combining interceptors you get a fully functional ConnectionManager.
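To make the idea concrete, here is a toy sketch of such a chain. All 
names are made up for illustration; they are not the actual interfaces 
I'm writing.

```java
// Toy sketch of an interceptor-based ConnectionManager: each interceptor
// implements one piece of behavior and delegates to the next in the chain.
// All names here are illustrative, not actual Geronimo classes.
interface ConnectionInterceptor {
    Object getConnection(Object mcf, Object requestInfo);
}

// Innermost interceptor: would ask the ManagedConnectionFactory for a
// physical connection; here a string stands in for the real call.
class MCFInterceptor implements ConnectionInterceptor {
    public Object getConnection(Object mcf, Object requestInfo) {
        return "connection-from-" + mcf; // real code: mcf.createManagedConnection(...)
    }
}

// Outer interceptor: would enlist the ManagedConnection in the current
// transaction before handing it out; here it simply wraps the next result.
class TransactionEnlistingInterceptor implements ConnectionInterceptor {
    private final ConnectionInterceptor next;
    TransactionEnlistingInterceptor(ConnectionInterceptor next) { this.next = next; }
    public Object getConnection(Object mcf, Object requestInfo) {
        Object mc = next.getConnection(mcf, requestInfo);
        // real code: tm.getTransaction().enlistResource(mc.getXAResource());
        return mc;
    }
}

public class InterceptorChainSketch {
    // Combining interceptors yields a fully functional ConnectionManager.
    public static Object allocate(Object mcf) {
        ConnectionInterceptor chain =
                new TransactionEnlistingInterceptor(new MCFInterceptor());
        return chain.getConnection(mcf, null);
    }
}
```

Adding or removing a concern (security, pooling, transaction 
enlistment) then becomes a matter of inserting or dropping an 
interceptor rather than modifying a monolithic class.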

I have a few comments interspersed below.

On Saturday, September 20, 2003, at 06:36 PM, gianny DAMOUR wrote:

> Hello,
> I have submitted a “preview” patch – GERONIMO-90 (BTW, it would be 
> great to add a connector module to JIRA) – aimed at addressing the 
> “Connection Management” section of the JCA specification. I did not 
> have enough time to package it correctly – provide a *-service.xml 
> file – however, the implementation has reached a stage that requires 
> your feedback.
> This implementation has been tested using the JDBC connector early 
> access RI, which can be downloaded here, and the 
> OracleConnectionPoolDataSource as the underlying 
> ManagedConnectionFactory.
> The big picture of this implementation is:
> GeronimoConnectionManager:
> The ConnectionManager spi interface has been implemented and delegates 
> the allocation of connection handles to a pool of ManagedConnection. 
> For now, the ConnectionManager is really simple: it delegates directly 
> to the pool. However, one needs to hook the Transaction and 
> Security services into the allocateConnection method. AFAIK, it should 
> be a “simple” task: a ConnectionFactory MUST – as required by the 
> specification – call allocateConnection in the same thread as the 
> application component requesting the connection. In other words, two 
> ThreadLocals (one related to our TM and one related to our Security 
> Manager) should do the trick.

The TransactionManager is required to keep track of this thread 
association for us, so we don't need our own ThreadLocal.  I'm less 
familiar with the security specs, but I think any reasonable 
Security Manager should also keep track of thread-to-security 
domain/subject associations.
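In other words, the thread association already lives inside the 
TransactionManager. A toy model of what JTA's getTransaction() does for 
us (ToyTransactionManager is a stand-in, not a real TM):

```java
// Toy model: a JTA TransactionManager maintains the thread-to-transaction
// association internally (getTransaction() is defined to return the
// transaction bound to the calling thread), so the ConnectionManager
// needs no ThreadLocal of its own. Illustrative code, not a real TM.
final class ToyTransactionManager {
    private final ThreadLocal<String> current = new ThreadLocal<>();
    void begin(String txId) { current.set(txId); }
    String getTransaction() { return current.get(); } // per-thread lookup
    void commit() { current.remove(); }
}

public class ThreadAssociationSketch {
    // A ConnectionManager can simply ask the TM at allocation time.
    public static String transactionFor(ToyTransactionManager tm) {
        return tm.getTransaction();
    }
}
```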
> Partition:
> The specifications do not define how connection pooling should be 
> implemented. However, some non-prescriptive guidelines have been 
> provided. One of them is to partition the pool. This is basically 
> what I have decided to implement: the pool is partitioned on a 
> per-ManagedConnectionFactory basis. For now, it is further partitioned 
> into idle, active, factory, and destroy partitions. The general idea 
> of this design is to define a distinct set of behaviors for each kind 
> of partition.
> Examples:
> The factory partition is in charge of creating/allocating new 
> connection handles. When its allocateConnection method is called, it 
> decides whether a new ManagedConnection should be created or an 
> existing one can be re-used.
> The XA partition (to be implemented) is in charge of 
> creating/allocating new transacted connection handles. When its 
> allocateConnection is called, it enlists the ManagedConnection with 
> our TM and then gets a connection handle from this enlisted 
> ManagedConnection.
> PartitionEventSupport, PartitionEvent and PartitionListener:
> Inter-partition events can be propagated via an AWT-like event model. 
> This mechanism is used, for example, by the factory partition: it 
> monitors the idle and destroy partitions in order to decide how to 
> serve a new allocation request. More precisely, if a 
> ManagedConnection is added to the idle partition, then a permit to try 
> a matchManagedConnections call is added. If a ManagedConnection is 
> added to the destroy partition, then a permit to create a new 
> ManagedConnection is added.
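If I understand the permit mechanism correctly, it could be modeled 
roughly like this (toy classes with illustrative names, not the 
GERONIMO-90 ones):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the AWT-style event flow: the idle and destroy partitions
// fire events, and the factory partition listens, converting them into
// "match" and "create" permits. Names are illustrative only.
interface PartitionListener {
    void connectionAdded(String partitionName);
}

class Partition {
    private final String name;
    private final List<PartitionListener> listeners = new ArrayList<>();
    Partition(String name) { this.name = name; }
    void addListener(PartitionListener l) { listeners.add(l); }
    void add(Object managedConnection) {
        // the connection itself is irrelevant to the toy; just notify
        for (PartitionListener l : listeners) l.connectionAdded(name);
    }
}

public class FactoryPartition implements PartitionListener {
    int matchPermits;  // permits to try matchManagedConnections on idle MCs
    int createPermits; // permits to create brand-new ManagedConnections
    public void connectionAdded(String partitionName) {
        if ("idle".equals(partitionName)) matchPermits++;
        else if ("destroy".equals(partitionName)) createPermits++;
    }
}
```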
> PartitionRecycler and PartitionRecycling:
> Partitions may be recycled. For instance, if a ManagedConnection sits 
> idle for too long, then it may become eligible for recycling 
> (destruction, in the case of an idle ManagedConnection).

I'm not sure I understand exactly what you are doing here, but I think 
it's something I didn't implement all of in JBoss.  I hope you will be 
able to fit this into the interceptor based framework I am proposing.
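If the recycling amounts to an idle-timeout sweep, a minimal sketch 
might look like this (illustrative names, assuming a periodic sweep; 
not the patch's classes):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Minimal sketch of idle-timeout recycling: a sweep over the idle
// partition destroys ManagedConnections that have sat unused past a
// threshold. Names are illustrative.
public class IdleRecycler {
    private final Map<Object, Long> idleSince = new HashMap<>();
    private final long maxIdleMillis;
    public IdleRecycler(long maxIdleMillis) { this.maxIdleMillis = maxIdleMillis; }
    public void markIdle(Object mc, long now) { idleSince.put(mc, now); }
    // Returns how many connections were recycled (destroyed) this sweep.
    public int sweep(long now) {
        int recycled = 0;
        for (Iterator<Map.Entry<Object, Long>> it =
                 idleSince.entrySet().iterator(); it.hasNext();) {
            Map.Entry<Object, Long> e = it.next();
            if (now - e.getValue() > maxIdleMillis) {
                it.remove();  // real code: mc.destroy()
                recycled++;
            }
        }
        return recycled;
    }
}
```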
> LoggerFactory:
> The inner workings of ManagedConnectionFactory and ManagedConnection 
> can be tracked via a PrintWriter. LoggerFactory defines the contract 
> to obtain a PrintWriter factory backed by various output streams.
> Open issues:
> GeronimoConnectionManager MUST be Serializable. I believe that this 
> requirement exists to support Serializable but not Referenceable 
> ConnectionFactory implementations. The current implementation is a 
> rather big instance (it extends AbstractContainer) and should not be. 
> Moreover, the connection pool used by the implementation is declared 
> transient and should not be. (One needs to define a mechanism to get a 
> handle on the pool without having to reference it – I do not want a 
> JMX lookup, because that is definitely (?) not the right bus to push 
> allocation requests through.)

I think the only circumstance in which a ConnectionManager is 
serialized is when a connection handle is serialized, perhaps when an 
ejb instance is passivated.  The solution I came up with in JBoss is to 
have a serializable proxy that is just a handle.  The first time it is 
used it looks up the actual connection manager implementation (via 
jmx); after that it can use the (transient) reference.
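A stripped-down version of that handle pattern, with a static registry 
standing in for the JMX lookup (names are illustrative, not the JBoss 
classes):

```java
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the serializable-handle pattern: the proxy serializes only a
// lookup key; the heavyweight ConnectionManager reference is transient and
// re-resolved on first use after deserialization. The static registry
// stands in for the JMX lookup; all names are illustrative.
public class ConnectionManagerHandle implements Serializable {
    public static final Map<String, Object> registry = new ConcurrentHashMap<>();
    private final String name;                  // survives serialization
    private transient Object connectionManager; // does not survive it
    public ConnectionManagerHandle(String name) { this.name = name; }
    public Object getConnectionManager() {
        if (connectionManager == null) {            // first use after deserialization
            connectionManager = registry.get(name); // real code: a JMX lookup
        }
        return connectionManager;                   // cached transient reference
    }
}
```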
> A thorough code coverage/review MUST be done. The goal is to make sure 
> that the implementation is thread-safe. The implementation has been 
> stressed with 10 concurrent clients, each opening and closing a 
> connection 100 times. During this stress, no concurrent modification 
> exceptions have been raised (it always breaks when you do not want it 
> to).
> The current implementation uses dumb synchronization. One should 
> consider the concurrent API developed by Doug Lea. The stress test (20 
> concurrent clients, 100 requests) has been executed in ~7500 ms on my 
> box (P4 2GHz). However, it does not scale well with the maximum 
> number of ManagedConnections, which is a pity for a pool. I have 
> identified the issue: when idle connections are available, 
> matchManagedConnections is invoked under synchronization in order to 
> reserve all the ManagedConnections passed to this method.

This is one of my big complaints about the connector spec.  What I did 
was to partition the pool based on configurable criteria (just one 
pool, by Subject, by ConnectionRequestInfo, or by Subject and 
ConnectionRequestInfo) and supply exactly one match choice to 
matchManagedConnections.  I think this is a reasonable default strategy: 
in nearly 2 years in JBoss, only one person had a connector for which 
this was not appropriate.  However, we should also have the "dumb" 
strategy you have implemented.  I've wondered if there is some middle 
ground, but haven't thought of it yet.
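A stripped-down sketch of that partitioning strategy, with Subject and 
ConnectionRequestInfo modeled as plain strings (illustrative, not the 
JBoss code):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of the JBoss-style strategy: partition the pool by a configurable
// key (here Subject + ConnectionRequestInfo, both modeled as Strings) so
// that matchManagedConnections only ever sees one candidate, avoiding
// global synchronization around matching. Illustrative names only.
public class PartitionedPool {
    private final Map<String, Deque<Object>> partitions = new HashMap<>();

    private static String key(String subject, String requestInfo) {
        return subject + "|" + requestInfo;
    }
    public void release(String subject, String requestInfo, Object mc) {
        partitions.computeIfAbsent(key(subject, requestInfo),
                                   k -> new ArrayDeque<>()).push(mc);
    }
    // Exactly one candidate is offered to "match": the head of the
    // partition for this (subject, requestInfo) pair, if any.
    public Object acquire(String subject, String requestInfo) {
        Deque<Object> p = partitions.get(key(subject, requestInfo));
        return (p == null || p.isEmpty()) ? null : p.pop();
    }
}
```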
> Could you please have a look at this implementation and let me know 
> if you are happy to progress based on this code base? In the 
> meantime, I have put it on hold.

I'm afraid I haven't actually had a chance to look at your code yet.  
I'm trying to implement some of the parts that appear to be missing 
from your code (transaction stuff, mostly) in my interceptor framework, 
and I hope we can work together to get everything in.

Many thanks,
* David Jencks
* Partner
* Core Developers Network

> Gianny
