geronimo-dev mailing list archives

From Jules Gosnell <>
Subject Re: [wadi-dev] Re: Session Policy was: heads up: initial contribution of a client API to session state management for OpenEJB, ServiceMix, Lingo and Tuscany
Date Thu, 16 Mar 2006 22:57:16 GMT
Filip Hanik - Dev Lists wrote:

> Hey Jules,
> thanks for commenting, I will pop in on codehaus devlists.
> The lazy replicated map supports more than one backup node; with a 
> very small tweak in just one method, you can change it to N backup 
> nodes, N being configurable - just a matter of getting the conf 
> param down to the impl level.
nice :-) - we are just getting the first cut of our in-vm replication 
service in place - it is similarly parameterisable, with a pluggable 
strategy for selecting suitable replication partners...

> Apache Tribes, as I like to nickname the Tomcat group communication 
> protocol, has an implementation at
> including the LazyReplicatedMap and a MapDemo (you're gonna be awed by 
> my Swing skills).

Hmmm.. - I shall need to take the time to look at this.

> I am also about to implement a regular ReplicatedMap, to use 
> for context attribute replication, a much sought-after feature.

Yes - I have shied away from supporting distributed context attributes, 
since they are not specced - but, you never know :-)

> I will subscribe to the WADI list and we can continue over there re: 
> session management.

I look forward to seeing you over there,


> Filip
> Jules Gosnell wrote:
>> Filip Hanik - Dev Lists wrote:
>>> gentlemen, not a committer here, but wanted to share some thoughts.
>>> in my opinion, the Session API should not have to know about 
>>> clustering or session replication, nor should it need to worry about 
>>> location.
>>> the clustering API should take care of all of that.
>> We are 100% in agreement here, Filip.
>>> the solution that we plan to implement for Tomcat is fairly 
>>> straightforward. Let me see if I can give an idea of how the API 
>>> shouldn't need to worry. It's a little lengthy, but it shows that 
>>> the Session and the SessionManager need to know zero about 
>>> clustering or session locations. (this is only one solution, and 
>>> other solutions should demonstrate the same point: the Session API 
>>> needs to know nothing about clustering or session locations)
>>> 1. Requirements to be implemented by the API
>>>   bool isDirty - (has the session changed in this request)
>>>   bool isDiffable - is the session able to provide a diff
>>>   byte[] getSessionData() - returns the whole session
>>>   byte[] getSessionDiff() - optional, see isDiffable, resets the 
>>> diff data
>>>   void setSessionDiff(byte[] diff) - optional, see isDiffable, apply 
>>> changes from another node
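[In Java, the requirements above might be sketched roughly as follows. This is purely illustrative - the names are not the actual Tomcat API, and this sketch ships whole sessions only, skipping the optional diff support:]

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;

// Illustrative sketch only - not the actual Tomcat API.
// A session that can report dirtiness and serialize itself
// for the replication layer.
public class SimpleReplicableSession implements Serializable {
    private final HashMap<String, Object> attributes = new HashMap<>();
    private transient boolean dirty = false;

    public void setAttribute(String name, Object value) {
        attributes.put(name, value);
        dirty = true;                 // any write marks the session dirty
    }

    public boolean isDirty()    { return dirty; }
    public boolean isDiffable() { return false; } // this sketch ships whole sessions only

    // Serialize the whole session for primary-to-backup transfer.
    public byte[] getSessionData() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(attributes);
        }
        dirty = false;                // replicated; clear the flag
        return bos.toByteArray();
    }
}
```

[The dirty flag is transient on purpose: it is local bookkeeping and should not travel with the serialized session.]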
>> So, delta-ed sessions, at whole session or attribute granularity ? 
>> and when will you be sending the deltas - immediately, end of 
>> request[-group], pluggable strategies ?
>>> 2. Requirements to be implemented by the API
>>>   void setSessionMap(HashMap map) - makes the map implementation 
>>> pluggable
>>> 3. And the key to this, is that we will have an implementation of a 
>>> LazyReplicatedHashMap
>>>   The key object in this map is the session Id.
>>>   The map entry object is an object that looks like this
>>>   ReplicatedEntry {
>>>      string id;//sessionid
>>>      bool isPrimary; //does this node hold the data
>>>      bool isBackup; //does this node hold backup data
>>>      Session session; //not null values for primary and backup nodes
>>>      Member primary; //information about the primary node
>>>      Member backup; //information about the backup node
>>>   }
>>>   The LazyReplicatedHashMap overrides get(key) and put(id,session)
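[A rough sketch of that lazy get(), purely illustrative and not the actual Tribes implementation - plain strings stand in for Tribes Member, and the network fetch and broadcast are stubbed out:]

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the lazy lookup described above. Every node
// keeps a ReplicatedEntry per session id, but only fetches the session
// data when asked for a session it does not hold locally.
public class LazyMapSketch {
    static class ReplicatedEntry {
        String id;                      // session id
        boolean isPrimary;              // does this node hold the data?
        boolean isBackup;               // does this node hold backup data?
        Object session;                 // non-null only on primary and backup
        String primaryNode, backupNode; // stand-ins for Tribes Member
    }

    private final ConcurrentHashMap<String, ReplicatedEntry> map = new ConcurrentHashMap<>();

    // Register a session id; only the primary node holds the data.
    public void putEntry(String id, boolean primary) {
        ReplicatedEntry e = new ReplicatedEntry();
        e.id = id;
        e.isPrimary = primary;
        e.primaryNode = "nodeA";        // hypothetical primary node name
        if (primary) e.session = "session-" + id;
        map.put(id, e);
    }

    public Object get(String id) {
        ReplicatedEntry e = map.get(id);
        if (e == null) return null;
        if (!e.isPrimary && !e.isBackup) {
            // The lazy part: pull the data from the current primary,
            // take over as primary, and broadcast the new location.
            e.session = fetchFrom(e.primaryNode, id);
            e.isPrimary = true;
            publishLocations(e);
        }
        return e.session;
    }

    Object fetchFrom(String node, String id) { return "session-" + id; } // network call in real life
    void publishLocations(ReplicatedEntry e) { /* all-to-all broadcast in real life */ }
}
```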
>> interesting...
>>> So all the nodes will have the sessionId,ReplicatedEntry 
>>> combinations in their session maps. But only two 
>> is two a fixed number or a deploy-time parameter ?
>>> nodes will have the actual data.
>>> This solution is for sticky LB only, but when failover happens, the 
>>> LB can pick any node as each node knows where to get the data.
>>> The newly selected node, will keep the backup node or select a new 
>>> one, and do a publish to the entire cluster of the locations.
>>> As you can see, all-to-all communication only happens when a 
>>> Session is (created|destroyed|failed over). Other than that it is 
>>> primary-to-backup communication only, and this can be in terms of 
>>> diffs or entire sessions using isDirty or getDiff. This is 
>>> triggered either by an interceptor at the end of each request or by 
>>> a batch process - less network jitter but less accuracy (though 
>>> adequate) for failover.
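[That end-of-request trigger might look something like this - again an illustrative sketch with invented names, not the actual Tomcat interceptor API, with the transport reduced to a stub interface:]

```java
// Illustrative sketch of the end-of-request replication trigger
// described above - not the actual Tomcat interceptor API.
public class ReplicationTrigger {
    // Minimal stand-ins for the session and the backup channel.
    interface ReplicableSession {
        boolean isDirty();
        boolean isDiffable();
        byte[] getSessionData();
        byte[] getSessionDiff();
    }
    interface BackupChannel { void send(byte[] payload); }

    // Called by an interceptor at the end of each request.
    public static void afterRequest(ReplicableSession session, BackupChannel backup) {
        if (!session.isDirty()) return;          // nothing changed; send nothing
        byte[] payload = session.isDiffable()
                ? session.getSessionDiff()       // small delta to the backup
                : session.getSessionData();      // otherwise the whole session
        backup.send(payload);                    // primary-to-backup only
    }
}
```

[A batch variant would simply run the same check over all sessions on a timer instead of per request.]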
>> I see - that answers my question about when replication occurs :-)
>>> As you can see, access time is not relevant here, nor does the 
>>> Session API even know about clustering.
>> yes !
>>> In tomcat we have separated out group communication into a separate 
>>> module, we are implementing the LazyReplicatedHashMap right now just 
>>> for this purpose.
>>> positive thoughts, criticism and bashing are all welcome :)
>> This approach has much more in common with WADI's - in fact there 
>> is a lot of synergy here. I think the WADI and TC clustering teams could 
>> learn a lot from each other. I would be very interested in sitting 
>> down with you Filip and having a long chat about session management. 
>> Do you have a Tomcat clustering-specific list that I could jump onto 
>> ? You might be interested in popping in on  and 
>> learning a little more about WADI ?
>> regards,
>> Jules
>>> Filip

"Open Source is a self-assembling organism. You dangle a piece of
string into a super-saturated solution and a whole operating-system
crystallises out around it."

 * Jules Gosnell
 * Partner
 * Core Developers Network (Europe)
 * Open Source Training & Support.
