From: David Jencks
To: geronimo-dev@incubator.apache.org
Date: Mon, 29 Sep 2003 21:36:40 -0700
Subject: Re: JCA Connection Management proposal

Sorry for the long delay in replying to this. And perhaps this should be on the wiki, but I'm writing here anyway.

Based on my experience with the JBoss ConnectionManager implementations, I'm convinced that, given the flexibility we want, any monolithic or inheritance-based approach will quickly become unmaintainable. I've started writing an interceptor-based ConnectionManager and hope it will be in a previewable state soon. The idea is that each chunk of functionality, such as getting a ManagedConnection from a ManagedConnectionFactory, or enlisting a ManagedConnection in a transaction, lives in a separate interceptor. By combining interceptors you get a fully functional ConnectionManager.
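To make the shape of this concrete, here is a rough sketch (all names here are hypothetical, not the actual code, and each type would live in its own file):

    import javax.resource.ResourceException;
    import javax.resource.spi.ConnectionRequestInfo;
    import javax.resource.spi.ManagedConnection;
    import javax.resource.spi.ManagedConnectionFactory;
    import javax.transaction.Transaction;
    import javax.transaction.TransactionManager;

    // One slice of ConnectionManager behavior; everything else is
    // delegated down the chain.
    public interface ConnectionInterceptor {
        ManagedConnection getManagedConnection(ManagedConnectionFactory mcf,
                ConnectionRequestInfo info) throws ResourceException;

        void returnManagedConnection(ManagedConnection mc);
    }

    // Example link in the chain: obtain a ManagedConnection from the
    // next interceptor (e.g. the pooling interceptor), then enlist it
    // in the transaction the TransactionManager associates with the
    // calling thread.
    public class TransactionEnlistingInterceptor
            implements ConnectionInterceptor {

        private final ConnectionInterceptor next;
        private final TransactionManager tm;

        public TransactionEnlistingInterceptor(ConnectionInterceptor next,
                TransactionManager tm) {
            this.next = next;
            this.tm = tm;
        }

        public ManagedConnection getManagedConnection(
                ManagedConnectionFactory mcf, ConnectionRequestInfo info)
                throws ResourceException {
            ManagedConnection mc = next.getManagedConnection(mcf, info);
            try {
                Transaction tx = tm.getTransaction();
                if (tx != null) {
                    tx.enlistResource(mc.getXAResource());
                }
                return mc;
            } catch (Exception e) {
                // Enlistment failed: hand the connection back down the
                // chain rather than leaking it.
                next.returnManagedConnection(mc);
                throw new ResourceException("Could not enlist: " + e);
            }
        }

        public void returnManagedConnection(ManagedConnection mc) {
            next.returnManagedConnection(mc);
        }
    }

A fully functional ConnectionManager is then just the composition: a pooling interceptor wrapped by transaction enlistment, wrapped by security, and so on.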
I have a few comments interspersed below.

On Saturday, September 20, 2003, at 06:36 PM, gianny DAMOUR wrote:

> Hello,
>
> I have submitted a "preview" patch - GERONIMO-90 (BTW, it would be
> great to add a connector module to JIRA) - aiming at addressing the
> "Connection Management" section of the JCA specification. I did not
> have enough time to package it correctly - provide a *-service.xml
> file - but the implementation has reached a stage at which it needs
> your feedback.
>
> This implementation has been tested using the JDBC connector early
> access RI, which can be downloaded from
> http://java.sun.com/products/jdbc/related.html, with
> OracleConnectionPoolDataSource as the underlying
> ManagedConnectionFactory.
>
> The big picture of this implementation is:
>
> GeronimoConnectionManager:
>
> The ConnectionManager SPI interface has been implemented and
> delegates the allocation of connection handles to a pool of
> ManagedConnections. For now, the ConnectionManager is really simple:
> it delegates directly to the pool. However, one needs to hook the
> Transaction and Security services into the allocateConnection
> method. AFAIK, this should be a "simple" task: a ConnectionFactory
> MUST - as required by the specification - call allocateConnection in
> the same thread as the application component requesting the
> connection. In other words, two ThreadLocals (one for our TM and one
> for our Security Manager) should do the trick.

The TransactionManager is required to keep track of this thread association for us, so we don't need our own ThreadLocal. I'm less familiar with the security specs, but I think any reasonable Security Manager should also keep track of thread-to-security-domain/subject associations.

> Partition:
>
> The specification does not define how connection pooling should be
> implemented; however, some non-prescriptive guidelines are provided.
> One of them is to partition the pool, and this is basically what I
> have decided to implement: the pool is partitioned on a
> per-ManagedConnectionFactory basis. For now, it is further
> partitioned into idle, active, factory, and destroy partitions. The
> general idea of this design is to define a distinct set of behaviors
> for each kind of partition.
>
> Examples:
> The factory partition is in charge of creating/allocating new
> connection handles. When its allocateConnection method is called, it
> decides whether a new ManagedConnection should be created or an
> existing one can be re-used.
> The XA partition (to be implemented) is in charge of
> creating/allocating new transacted connection handles. When its
> allocateConnection is called, it enlists the ManagedConnection with
> our TM and then gets a connection handle from the enlisted
> ManagedConnection.
>
> PartitionEventSupport, PartitionEvent and PartitionListener:
>
> Inter-partition events are propagated via an AWT-like event model.
> This mechanism is used, for example, by the factory partition: it
> monitors the idle and destroy partitions in order to decide how to
> serve a new allocation request. More precisely, if a
> ManagedConnection is added to the idle partition, then a permit to
> try a matchManagedConnections call is added. If a ManagedConnection
> is added to the destroy partition, then a permit to create a new
> ManagedConnection is added.
>
> PartitionRecycler and PartitionRecycling:
>
> Partitions may be recycled. For instance, if a ManagedConnection
> sits idle too long, it may become eligible for recycling
> (destruction, in the case of an idle ManagedConnection).

I'm not sure I understand exactly what you are doing here, but I think it's something I didn't fully implement in JBoss. I hope you will be able to fit this into the interceptor-based framework I am proposing.

> LoggerFactory:
>
> The inner workings of a ManagedConnectionFactory and its
> ManagedConnections can be traced via a PrintWriter. LoggerFactory
> defines the contract for obtaining a PrintWriter factory backed by
> various output streams.
>
> Open issues:
>
> GeronimoConnectionManager MUST be Serializable. I believe this
> requirement exists to support ConnectionFactories that are
> Serializable but not Referenceable. The current implementation is a
> rather big instance (it extends AbstractContainer) and should not
> be. Moreover, the connection pool used by the implementation is
> declared transient and should not be. (One needs to define a
> mechanism - I do not want a JMX lookup, because that is definitely
> (?) not the right bus to push allocation requests through - to get a
> handle on the pool without having to reference it.)

I think the only circumstance in which a ConnectionManager is serialized is when a connection handle is serialized, perhaps when an ejb instance is passivated. The solution I came up with in JBoss is to have a serializable proxy that is just a handle. The first time it is used, it looks up the actual connection manager implementation (via jmx); after that it can use the (transient) reference.
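Roughly like this (a sketch only; the lookup mechanism is hand-waved and the names are made up):

    import javax.resource.ResourceException;
    import javax.resource.spi.ConnectionManager;
    import javax.resource.spi.ConnectionRequestInfo;
    import javax.resource.spi.ManagedConnectionFactory;

    // ConnectionManager already extends java.io.Serializable, which is
    // exactly why this handle trick is needed. Only cmName travels on
    // the wire; the live implementation is re-resolved lazily.
    public class ConnectionManagerProxy implements ConnectionManager {

        private final String cmName;                  // serializable handle
        private transient ConnectionManager delegate; // resolved on first use

        public ConnectionManagerProxy(String cmName) {
            this.cmName = cmName;
        }

        public Object allocateConnection(ManagedConnectionFactory mcf,
                ConnectionRequestInfo info) throws ResourceException {
            if (delegate == null) {
                // First call after construction or deserialization:
                // find the real ConnectionManager, e.g. through the
                // MBean server.
                delegate = lookup(cmName);
            }
            return delegate.allocateConnection(mcf, info);
        }

        private static ConnectionManager lookup(String name) {
            // Omitted: in JBoss this goes through JMX; any registry
            // that can map a name to the live instance would do.
            throw new UnsupportedOperationException("lookup not sketched");
        }
    }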
> A thorough code coverage/review MUST be done. The goal is to make
> sure the implementation is thread-safe. The implementation has been
> stressed with 10 concurrent clients, each opening and closing a
> connection 100 times. During this stress test, no concurrent
> modification exceptions were raised (it always breaks when you least
> want it to).
>
> The current implementation uses dumb synchronization. One should
> consider the concurrency API developed by Doug Lea. The stress test
> (20 concurrent clients, 100 requests) executed in ~7500 ms on my box
> (P4 2GHz). However, it does not scale well as the maximum number of
> ManagedConnections grows, which is a pity for a pool. I have
> identified the issue: when idle connections are available,
> matchManagedConnections is invoked under synchronization in order to
> reserve all the ManagedConnections passed to this method.

This is one of my big complaints about the connector spec. What I did was to partition the pool based on configurable criteria (just one pool, by Subject, by ConnectionRequestInfo, or by Subject and ConnectionRequestInfo) and supply exactly one match choice to matchManagedConnections (a rough sketch of this is in the P.S. below). I think this is a reasonable default strategy: in nearly 2 years in JBoss, only one person had a connector for which this was not appropriate. However, we should also have the "dumb" strategy you have implemented. I've wondered if there is some middle ground, but haven't thought of it yet.

> Could you please have a look at this implementation and let me know
> whether you are happy to progress based on this code base. In the
> meantime, I will put it on hold.

I'm afraid I haven't actually had a chance to look at your code yet. I'm trying to implement some of the parts that appear to be missing from your code in my interceptor framework (transaction stuff, mostly), and I hope we can work together to get everything in.

Many thanks,

/**********************************
* David Jencks
* Partner
* Core Developers Network
* http://www.coredevelopers.net
**********************************/

> Gianny
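P.S. A rough sketch of the one-candidate matching idea described above (hypothetical names and pre-1.5 collections; not the actual JBoss code):

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.LinkedList;
    import java.util.Map;
    import javax.resource.ResourceException;
    import javax.resource.spi.ConnectionRequestInfo;
    import javax.resource.spi.ManagedConnection;
    import javax.resource.spi.ManagedConnectionFactory;
    import javax.security.auth.Subject;

    // One idle list per (Subject, ConnectionRequestInfo) pair, so
    // matchManagedConnections only ever sees a single candidate and no
    // global reservation of the whole idle set is needed.
    public class PartitionedPool {

        private final Map idlePools = new HashMap(); // key -> LinkedList

        public synchronized ManagedConnection getManagedConnection(
                ManagedConnectionFactory mcf, Subject subject,
                ConnectionRequestInfo info) throws ResourceException {
            LinkedList idle = poolFor(subject, info);
            if (!idle.isEmpty()) {
                ManagedConnection candidate =
                        (ManagedConnection) idle.getFirst();
                // Exactly one match choice is offered to the adapter.
                ManagedConnection match = mcf.matchManagedConnections(
                        Collections.singleton(candidate), subject, info);
                if (match != null) {
                    idle.removeFirst();
                    return match;
                }
            }
            return mcf.createManagedConnection(subject, info);
        }

        public synchronized void returnManagedConnection(
                ManagedConnection mc, Subject subject,
                ConnectionRequestInfo info) {
            poolFor(subject, info).addFirst(mc);
        }

        private LinkedList poolFor(Subject subject,
                ConnectionRequestInfo info) {
            // Arrays.asList gives element-wise equals/hashCode, which
            // makes a cheap composite map key.
            Object key = Arrays.asList(new Object[] { subject, info });
            LinkedList idle = (LinkedList) idlePools.get(key);
            if (idle == null) {
                idle = new LinkedList();
                idlePools.put(key, idle);
            }
            return idle;
        }
    }

The "dumb" strategy would instead pass the whole idle set to matchManagedConnections under one lock, which is exactly the scaling problem described above.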