From "gianny DAMOUR" <gianny_dam...@hotmail.com>
Subject JCA - Connection Management final proposal
Date Thu, 02 Oct 2003 12:22:35 GMT
Hello,


Sorry for this long mail, but I think it is needed to explain the 
architecture of the proposal I have submitted for the Connection Management 
section of the JCA specification, namely GERONIMO-97.

This architecture aims to break apart, or partition, the features that must 
be covered in order to be JCA 1.5 compliant. It does so by defining the 
notion of a partition. A partition is a functional unit that may or may not 
contain a set of physical connections, each known as a ManagedConnection, 
or MC for short. A partition can also contain sub-partitions, which allows 
a functional requirement to be broken down further.
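
To make this concrete, here is a minimal sketch of what such a partition 
contract could look like; the interface and method signatures below are 
illustrative only, not the actual code of the patch:

import javax.resource.ResourceException;
import javax.resource.spi.ConnectionRequestInfo;
import javax.resource.spi.ManagedConnection;
import javax.security.auth.Subject;

// Hypothetical partition contract: a functional unit that may hold a set
// of MCs and may delegate part of its work to sub-partitions.
public interface Partition {
    // Route a creation/allocation request through this partition.
    ManagedConnection allocate(Subject subject, ConnectionRequestInfo info)
        throws ResourceException;

    // Release the MCs held by this partition and, recursively, those of
    // its sub-partitions.
    void free() throws ResourceException;
}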

The general idea is to route a creation/allocation request coming from a 
client to the relevant partition.

For instance, if an application component requests a local transacted 
connection, the request travels through the following functional units:

(1) ConnectionFactory -> (2) ConnectionManager -> (3) MasterPartition -> 
(4) FactoryPartition -> (5) IdlePartition -> (6) LocalTXPartition.

(1) is bound to JNDI during the deployment of a resource adapter;
(2) is provided by us in order to hook in our services;
(3) is a kind of ConnectionManager. Because (2) must be Serializable, it 
delegates each allocation request to (3) without processing it;
(4) is in charge of creating or re-using an MC;
(5) does nothing for now and simply forwards the MC returned by (4) to (6);
(6) is in charge of starting a LocalTransaction on the MC coming from (5), 
registering a ConnectionEventListener specific to local transacted 
connections, defining a transaction scope if the connection is marked as 
shareable by the calling application component, et cetera (see the sketch 
after this list).
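
As an illustration of step (6), here is a rough, simplified rendering of 
what the local-transaction step does with the MC forwarded by (5); the 
class and method names below are not the actual code of the patch:

import javax.resource.ResourceException;
import javax.resource.spi.ConnectionEvent;
import javax.resource.spi.ConnectionEventListener;
import javax.resource.spi.ManagedConnection;

// Simplified rendering of step (6): start a LocalTransaction on the MC
// forwarded by the IdlePartition and register a listener dedicated to
// local transacted connections.
public class LocalTXStep {
    public ManagedConnection forward(ManagedConnection mc)
            throws ResourceException {
        mc.getLocalTransaction().begin();
        mc.addConnectionEventListener(new ConnectionEventListener() {
            public void connectionClosed(ConnectionEvent e) { /* return the MC to the pool */ }
            public void connectionErrorOccurred(ConnectionEvent e) { /* destroy the MC */ }
            public void localTransactionStarted(ConnectionEvent e) { }
            public void localTransactionCommitted(ConnectionEvent e) { }
            public void localTransactionRolledback(ConnectionEvent e) { }
        });
        // Defining a transaction scope for shareable connections would
        // also happen here.
        return mc;
    }
}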

So far, I have implemented a rather simple hierarchy of partitions, which 
can deal to some extent with resource adapters declaring NoTransaction 
transaction support.

This hierarchy is currently hard-coded in the MCFPartition.postRegister 
method. However, if you have a look at the code, you can see that this 
initialization could be made entirely configurable. In other words, a 
standard do-it-yourself approach is possible as long as our geronimo.ra.xml 
DD allows it; the snippet below shows the idea.
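
Still assuming the hypothetical Partition contract sketched earlier, and 
with made-up constructor signatures, the hard-coded wiring comes down to 
something like the following; a configurable deployer could build the same 
structure from a geronimo.ra.xml description:

import javax.resource.spi.ManagedConnectionFactory;

// Illustrative hand-wiring, with made-up constructors, of the simple
// NoTransaction hierarchy.
public class HardCodedWiring {
    public Partition buildChain(ManagedConnectionFactory mcf) {
        Partition factory = new FactoryPartition(mcf); // creates or re-uses MCs
        Partition idle = new IdlePartition(factory);   // pass-through for now
        return new MasterPartition(idle);              // entry point behind the ConnectionManager
    }
}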

Another feature built into the current implementation is the recycling of 
MCs on a per-partition basis. The implementation provides the 
infrastructure to free the MCs contained by a specific partition and to 
recursively free those of its sub-partitions. The criteria used to decide 
which MCs to free can be whatever we want: an MC is wrapped by a 
ManagedConnectionWrapper, which can be extended (the submitted patch 
defines it as final, but should not) in order to support additional 
properties such as the number of times the wrapped MC has been hit, the 
average hits per minute, et cetera. For now, the implementation does not 
provide a helper to execute such a collection periodically; however, it 
would be easy to add a ClockDaemon to address this requirement.
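
To give an idea, once ManagedConnectionWrapper is no longer final, an 
extension tracking hit statistics could look like this (the constructor 
signature and method names here are assumptions on my side):

import javax.resource.spi.ManagedConnection;

// Hypothetical extension of ManagedConnectionWrapper tracking how often
// the wrapped MC is hit; a recycling policy could free the MCs whose hit
// rate falls below some threshold. The super-constructor is assumed.
public class CountingConnectionWrapper extends ManagedConnectionWrapper {
    private int hits;
    private final long createdAt = System.currentTimeMillis();

    public CountingConnectionWrapper(ManagedConnection mc) {
        super(mc);
    }

    public synchronized void hit() {
        hits++;
    }

    public synchronized double averageHitsPerMinute() {
        long elapsedMillis = System.currentTimeMillis() - createdAt;
        return elapsedMillis == 0 ? 0.0 : (hits * 60000.0) / elapsedMillis;
    }
}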

The proposed implementation also provides a mechanism to broadcast the 
availability (or the removal) of an MC within a given partition. One 
possible use is to implement a factory partition that monitors an idle and 
a destroy partition in order to decide how to handle an allocation request. 
However, it can be used for many other purposes; for instance, it is 
trivial to write a statistics collector partition, which can be used to 
keep an audit trail of a resource adapter pool.
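
A minimal sketch of such a broadcast contract, again with illustrative 
names only:

import javax.resource.spi.ManagedConnection;

// Hypothetical broadcast contract: observers are notified when an MC
// becomes available within, or is removed from, a given partition. A
// statistics collector partition would simply count these events.
public interface PartitionListener {
    void managedConnectionAdded(Partition source, ManagedConnection mc);
    void managedConnectionRemoved(Partition source, ManagedConnection mc);
}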

A minimalist deployer for the outbound-resourceadapter node of a ra.xml DD 
is also provided. It makes it possible to bootstrap a 
ManagedConnectionFactory and register our ConnectionManager from an 
exploded resource adapter archive (I am still facing some issues with 
archived deployments) dropped in the deploy directory.

I have also “hacked” the ENC setup in order to be able to provide an 
integration test of this proposal. More precisely, ConnectionFactory 
instances bootstrapped during the deployment are bound to the 
java:comp/env/jca/HackConnectionFactory JNDI name via our ReadOnlyContext. 
As our deployment scanner is single-threaded, it is possible to retrieve 
the instances bound to this specific name during the deployment of, for 
instance, an MBean.

I have submitted such an MBean, which can be dropped in the deploy 
directory and then operated from your preferred JMX console in order to 
see the implementation working in a running server. More precisely, this 
MBean stresses the implementation (I was really concerned about 
scalability, as it is paramount for pools), and you can tune the number of 
concurrent clients and the number of iterations to be executed per client.

For now, the pool has a hard-coded maximum size and a JMX-exposed timeout 
attribute. The latter is the number of milliseconds to wait for an 
available MC before raising an exception to the caller.
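
Retrieving the bootstrapped factory is then a plain ENC lookup; the 
concrete interface to cast the result to depends on the resource adapter 
deployed:

import javax.naming.InitialContext;
import javax.naming.NamingException;

// Plain JNDI lookup of the ConnectionFactory bound by the deployer under
// the hacked ENC name.
public class EncLookupExample {
    public Object lookupFactory() throws NamingException {
        InitialContext ctx = new InitialContext();
        return ctx.lookup("java:comp/env/jca/HackConnectionFactory");
    }
}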

During the implementation of this proposal, I used the JDBC connector 
provided by Sun, and in order to track what was going on within their RI, 
I implemented a couple of PrintWriter factories backed by various output 
streams. This is not a major feature; however, it could be reused by our 
geronimo.ra.xml DD to configure a logger for a resource adapter.
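
The idea is small enough to sketch here; the names are illustrative, and 
the produced writer is what would be handed to 
ManagedConnectionFactory.setLogWriter:

import java.io.OutputStream;
import java.io.PrintWriter;

// Hypothetical PrintWriter factory: the backing stream (System.out, a
// file, a logging bridge...) is what the geronimo.ra.xml DD would select.
public interface PrintWriterFactory {
    PrintWriter newPrintWriter();
}

class StreamPrintWriterFactory implements PrintWriterFactory {
    private final OutputStream out;

    StreamPrintWriterFactory(OutputStream out) {
        this.out = out;
    }

    public PrintWriter newPrintWriter() {
        return new PrintWriter(out, true); // auto-flush on println
    }
}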

Looking forward to your feedback and review.

BTW, based on my understanding, it is already possible to write JDBC and 
JMS connectors without having to wait for the Connection Management work 
to be complete.

Gianny
