geronimo-dev mailing list archives

From Filip Hanik - Dev Lists <devli...@hanik.com>
Subject Re: Jetty 6- Clustering - How it works
Date Thu, 11 Jan 2007 03:26:13 GMT
really awesome work Gianny, I know you've worked very hard on it.

Filip

David Jencks wrote:
> Wow this is great!!
>
> Are there instructions somewhere on how to set up a demo system?  
> Having an integration test would be even better :-)
>
> thanks
> david jencks
>
> On Jan 6, 2007, at 7:30 PM, Gianny Damour wrote:
>
>> Hi,
>>
>> I think that support for clustered Web-applications with Jetty is now 
>> working.
>>
>>
>> Here is a description of how this works; note that most of the 
>> described behavior is WADI specific.
>>
>> Group Communication
>> Group communications are performed by Tribes, which is the Tomcat 6 
>> group communication engine. I know very little of Tribes; however, I 
>> am pretty sure that Filip Hanik can give us an in-depth description, 
>> if requested. At a very high level, Tribes provides membership 
>> discovery and failure detection. It also provides basic message 
>> exchange communication primitives, that WADI builds upon to provide 
>> additional message exchange operations (e.g. request-reply).
>>
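As a rough illustration of the kind of layering described above, here is how a request-reply primitive can be built on top of a fire-and-forget send by correlating replies with requests. The class and method names are hypothetical, not the actual WADI or Tribes API (Tribes' real interfaces live under `org.apache.catalina.tribes`):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.*;

// Sketch: correlate a reply with its request using a correlation id,
// on top of a one-way send primitive.
public class RequestReply {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // The "transport": a stand-in for the physical group communication engine.
    public interface Transport { void send(String correlationId, String payload); }

    private Transport transport;
    public void setTransport(Transport t) { this.transport = t; }

    // Send a request and block until the matching reply arrives (or time out).
    public String requestReply(String payload, long timeoutMs) throws Exception {
        String id = UUID.randomUUID().toString();
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(id, future);
        transport.send(id, payload);
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            pending.remove(id);
        }
    }

    // Invoked when a reply carrying a correlation id comes back from a peer.
    public void onReply(String correlationId, String payload) {
        CompletableFuture<String> f = pending.get(correlationId);
        if (f != null) f.complete(payload);
    }
}
```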
>> Logical group communication engines are layered on top of the above 
>> (physical) group communication engine. A logical group communication 
>> engine, a ServiceSpace in WADI's terminology, provides the same 
>> features as a physical group communication engine and also allows the 
>> definition of sub-groups. This means that three nodes could be 
>> interconnected at the physical level while only two of them appear 
>> to exist at the logical level.
>>
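The physical-versus-logical membership idea can be sketched as follows: a node is visible in a logical group only if it is both physically connected and has joined that group. All names here are illustrative, not WADI's actual API:

```java
import java.util.*;

// Sketch: a logical group's membership is the subset of physically connected
// nodes that have joined that logical group (identified here by a String id).
public class LogicalView {
    private final Set<String> physicalMembers = new HashSet<>();
    // logical group id -> nodes that joined it
    private final Map<String, Set<String>> joined = new HashMap<>();

    public void nodeConnected(String node) { physicalMembers.add(node); }

    public void join(String node, String groupId) {
        joined.computeIfAbsent(groupId, k -> new HashSet<>()).add(node);
    }

    public void leave(String node, String groupId) {
        joined.getOrDefault(groupId, Collections.<String>emptySet()).remove(node);
    }

    // Members visible at the logical level: physically present AND joined.
    public Set<String> logicalMembers(String groupId) {
        Set<String> result = new HashSet<>(joined.getOrDefault(groupId, Collections.<String>emptySet()));
        result.retainAll(physicalMembers);
        return result;
    }
}
```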
>>
>> Clustered Web-Application Discovery
>> A clustered Web-application is placed into a logical group 
>> communication engine, which is uniquely identified by a URI. This URI 
>> is the Artifact id of the Web-application configuration. When this 
>> Web-application starts, the logical group communication engine starts 
>> and joins the logical sub-group identified by its unique id. 
>> Conversely, when the application stops, the logical group 
>> communication engine leaves the logical sub-group.
>>
>>
>> Partitioned State
>> WADI implements a partitioned session state topology. In a cluster 
>> all the session states are distributed across the cluster nodes and 
>> only one node holds the state of a given session. This design choice 
>> was made to improve scalability with respect to the size of data to 
>> be managed by a single node. Session locations, the information a 
>> node needs when it requests a session that it does not hold, are 
>> also each managed by a single node. When a node fails, the session 
>> states and session locations managed by that node are lost, and WADI 
>> is able to recreate them: session states are lazily recreated from 
>> replicas, and session locations are recreated by querying the 
>> cluster and asking each member which sessions it currently holds.
>>
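One common way to make session-location ownership deterministic, so that every node independently agrees on which single node to ask, is to hash the session id to a partition and map partitions onto members. This is a generic sketch of the idea, not WADI's actual partitioning algorithm:

```java
import java.util.List;

// Sketch: map each session id to exactly one partition, and each partition
// to exactly one owning node. Every node computes the same answer, so a node
// that does not hold a session knows which single node tracks its location.
public class Partitions {
    private final int partitionCount;

    public Partitions(int partitionCount) { this.partitionCount = partitionCount; }

    public int partitionOf(String sessionId) {
        // Mask the sign bit rather than Math.abs, which overflows on MIN_VALUE.
        return (sessionId.hashCode() & 0x7fffffff) % partitionCount;
    }

    // Partitions are spread across the current members in a fixed order.
    public String ownerOf(String sessionId, List<String> members) {
        return members.get(partitionOf(sessionId) % members.size());
    }
}
```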
>>
>> Session Creation
>> When an inbound request wants to create a new HttpSession, a WADI 
>> session is created. This session is hosted by the node receiving the 
>> inbound request. An HttpSession backed, under the covers, by the 
>> WADI session is then created and returned. WADI ensures that the 
>> session has an identifier that is unique cluster-wide.
>>
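One simple way to get cluster-wide unique identifiers is to combine a random part with the creating node's name, which also helps sticky-session routing. This is illustrative only; WADI's actual id scheme may differ:

```java
import java.security.SecureRandom;

// Sketch: a session id made of 16 random bytes (hex-encoded) plus the
// creating node's name as a suffix, so no two nodes can collide.
public class SessionIds {
    private static final SecureRandom RANDOM = new SecureRandom();

    public static String newSessionId(String nodeName) {
        byte[] bytes = new byte[16];
        RANDOM.nextBytes(bytes);
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb + "." + nodeName;  // node suffix, as in mod_jk-style routing
    }
}
```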
>>
>> Session Migration
>> When an inbound request wants to access an HttpSession and this 
>> session is not hosted by the local node, the node hosting the 
>> requested session migrates it to the node receiving the request. 
>> Under the covers, WADI ensures correct handling of concurrent 
>> session migration requests via a locking approach and updates its 
>> internal bookkeeping of session locations following migration events.
>>
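The locking aspect can be sketched in-memory: a per-session lock guarantees that when two nodes concurrently request the same session, only one of them actually takes the state. Names are hypothetical; the real WADI protocol runs over the network and also updates the location bookkeeping:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: migrating a session out under a per-session lock so concurrent
// migration requests cannot both obtain the state.
public class SessionHost {
    private final Map<String, Object> sessions = new ConcurrentHashMap<>();
    private final Map<String, Object> locks = new ConcurrentHashMap<>();

    public void host(String sessionId, Object state) { sessions.put(sessionId, state); }

    public boolean holds(String sessionId) { return sessions.containsKey(sessionId); }

    // Atomically remove the session here and hand its state to the requester.
    public Object migrateOut(String sessionId) {
        Object lock = locks.computeIfAbsent(sessionId, k -> new Object());
        synchronized (lock) {
            // Of several concurrent requesters, only one gets non-null state.
            return sessions.remove(sessionId);
        }
    }
}
```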
>>
>> Session Replication
>> When a request completes, the WADI session backing the HttpSession 
>> is notified. The WADI session is then replicated synchronously to 
>> one or more nodes. The selection of the back-up nodes for each 
>> session is customizable via a plugin strategy.
>>
>> When a node fails, replicas are re-organized based on the new list of 
>> existing members.
>>
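One example of such a pluggable back-up selection strategy is to pick the next k nodes after the session's primary in a fixed ring ordering; because the rule is deterministic over the member list, re-running it against the new membership after a failure also yields the re-organized replica placement. This is an illustrative strategy, not necessarily one WADI ships:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: choose the k nodes that follow the primary in ring order.
public class RingBackupStrategy {
    public static List<String> selectBackups(String primary, List<String> members, int k) {
        List<String> backups = new ArrayList<>();
        int start = members.indexOf(primary);
        // Never pick more backups than there are other members.
        for (int i = 1; i <= k && i < members.size(); i++) {
            backups.add(members.get((start + i) % members.size()));
        }
        return backups;
    }
}
```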
>>
>> Fail-Over
>> When an inbound request wants to access an HttpSession that was 
>> hosted by a node which has died, the cluster is queried for a 
>> replica of the requested session, and the HttpSession is recreated 
>> from it if possible.
>>
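The fail-over path can be sketched with an in-memory stand-in for the group query: ask each surviving member for a replica of the session and promote the first hit to a live session. Names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: recover a session from whichever surviving node holds a replica.
public class FailOver {
    // nodeName -> (sessionId -> replicated state)
    private final Map<String, Map<String, String>> replicasByNode = new HashMap<>();

    public void storeReplica(String node, String sessionId, String state) {
        replicasByNode.computeIfAbsent(node, k -> new HashMap<>()).put(sessionId, state);
    }

    // Query every surviving member for a replica of the session; first hit wins.
    public String recoverSession(String sessionId, Iterable<String> survivingNodes) {
        for (String node : survivingNodes) {
            Map<String, String> replicas = replicasByNode.get(node);
            if (replicas != null && replicas.containsKey(sessionId)) {
                return replicas.remove(sessionId); // promoted: no longer just a replica
            }
        }
        return null; // no replica survived; the session is lost
    }
}
```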
>>
>> Session Evacuation
>> When a Web-application stops, all the sessions that it holds are 
>> evacuated to the other nodes hosting the same clustered Web-application.
>>
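A minimal sketch of evacuation planning, assuming a simple round-robin hand-off of the stopping node's sessions to the remaining nodes (illustrative of the idea only, not WADI's actual evacuation protocol):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: spread a stopping node's sessions round-robin across the nodes
// still hosting the same clustered Web-application.
public class Evacuation {
    public static Map<String, List<String>> evacuate(List<String> sessionIds,
                                                     List<String> remainingNodes) {
        Map<String, List<String>> plan = new HashMap<>();
        for (String node : remainingNodes) plan.put(node, new ArrayList<>());
        for (int i = 0; i < sessionIds.size(); i++) {
            String target = remainingNodes.get(i % remainingNodes.size());
            plan.get(target).add(sessionIds.get(i));
        }
        return plan;
    }
}
```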
>>
>> It will take me 1 to 2 weeks to test specific error scenarios and 
>> ensure correctness; meanwhile, if anyone wants to know more about 
>> any specific area, please feel free to ask.
>>
>> Thanks,
>> Gianny
>
>
>

