geronimo-dev mailing list archives

From Dain Sundstrom <d...@iq80.com>
Subject Re: gcache implementation ideas [long]
Date Wed, 13 Sep 2006 21:19:23 GMT
That is some sweet ascii art.  What did you use?

This seems like a good KISS design.  How long do you think it will be
until we can start to use it?

-dain

On Sep 12, 2006, at 9:19 AM, Jeff Genender wrote:

> I wanted to go over a high-level design for the gcache component and
> get some feedback and input, and invite folks who are interested to
> join in... so here it goes...
>
> The gcache will be one of several cache/clustering offerings... but we
> are starting off with the first one...
>
> For the first pass I want to go with the master/slave full replication
> implementation.  What this means is a centralized caching server that
> runs a cache implementation underneath (likely ehcache); this server
> is known as the master.  My interest in ehcache is that it provides
> the ability to persist session state via configuration if full
> failure recovery is needed (no need to reinvent the wheel on a great
> cache).  The master will communicate with N slave servers, each also
> running a gcache implementation.
>
>    +--------+   +---------+  +---------+
>    |        |   |         |  |         |
>    | MASTER |   | SLAVE 1 |  | SLAVE 2 | ... n-slaves
>    |        |   |         |  |         |
>    +--------+   +---------+  +---------+
>       |   |            |           |
>       |   |            |           |
>       |   |____________|           |
>       |                            |
>       |____________________________|
>
>
>
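
If I'm reading the design right, the master is basically a thin wrapper
around a local ehcache instance that re-pushes every write to the
registered slaves.  A rough sketch of what that wrapper could look like
(GCacheMaster and SlaveConnection are made-up names, not a proposal for
the real API):

    import java.io.Serializable;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.Element;

    // Sketch only: ehcache for local storage, plus a push of every
    // write to each registered slave.
    public class GCacheMaster {

        private final Cache cache;   // ehcache underneath
        private final List<SlaveConnection> slaves =
                new CopyOnWriteArrayList<SlaveConnection>();

        public GCacheMaster(Cache cache) {
            this.cache = cache;
        }

        public void registerSlave(SlaveConnection slave) {
            slaves.add(slave);
        }

        public void put(Serializable key, Serializable value) {
            cache.put(new Element(key, value));     // store locally
            for (SlaveConnection slave : slaves) {  // replicate out
                slave.push(key, value);
            }
        }

        public Serializable get(Serializable key) {
            Element element = cache.get(key);
            return element == null ? null : element.getValue();
        }
    }

    // Placeholder for "a connection to one slave over the wire".
    interface SlaveConnection {
        void push(Serializable key, Serializable value);
    }
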
> We then have client component(s) that "plug in" and communicate with
> the server.  The configuration for the client should be very light,
> where it is only really concerned with the master/slave/slave/nth-slave
> list.  In other words, it communicates only with the master.  The
> master is responsible for "pushing" anything it receives to its slaves
> and other nodes in the cluster.  The slaves basically look like
> clients to the master.
>
>    +--------+   +---------+  +---------+
>    |        |   |         |  |         |
>    | MASTER |---| SLAVE 1 |  | SLAVE 2 |
>    |        |   |         |  |         |
>    +--------+   +---------+  +---------+
>        |  |                       |
>        |  +-----------------------+
>        |
>    ,-------.
>   ( CLIENT  )
>    `-------'
>
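
From the client's point of view, that "very light" configuration could
really just be an ordered list of addresses, master first, with the
slaves only there as failover candidates.  Something like this
(GCacheClientConfig is hypothetical, just to show the shape of it):

    import java.net.InetSocketAddress;
    import java.util.Arrays;
    import java.util.List;

    // Sketch only: the client is configured with master, slave 1,
    // slave 2, ... nth slave, in order.  All gets/puts go to the
    // first reachable entry (normally the master).
    public class GCacheClientConfig {

        private final List<InetSocketAddress> servers;

        public GCacheClientConfig(InetSocketAddress... servers) {
            this.servers = Arrays.asList(servers);
        }

        public List<InetSocketAddress> getServers() {
            return servers;
        }
    }

    // Usage:
    //   new GCacheClientConfig(
    //           new InetSocketAddress("master", 4040),
    //           new InetSocketAddress("slave1", 4040),
    //           new InetSocketAddress("slave2", 4040));
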
> In the event the master goes down, the client notes the timeout and
> then automatically communicates with slave #1 as the new master.
> Since slave #1 is also a client of the MASTER, it can determine,
> either by itself or by the first request that comes in asking for
> data, that it is the new master.
>
>    +--------+   +---------+  +---------+
>    |  OLD   |   |NEW MSTER|  |         |
>    | MASTER |   |   WAS   |--| SLAVE 2 |
>    |        |   | SLAVE 1 |  |         |
>    +--------+   +---------+  +---------+
>        |           _,'
>        X         ,'
>        |      ,-'
>    ,-------.<'
>   ( CLIENT  )
>    `-------'
>
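
So the failover rule is: on a timeout, drop the current connection and
move to the next address in the configured list, which by convention is
the new master.  A minimal sketch of that loop (FailoverInvoker and
RemoteCacheOperation are made up for illustration):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.util.List;

    // Sketch only: try the current server; on a timeout (or any I/O
    // failure) advance to the next address and treat it as the new
    // master.
    public class FailoverInvoker {

        private final List<InetSocketAddress> servers;
        private int current = 0;

        public FailoverInvoker(List<InetSocketAddress> servers) {
            this.servers = servers;
        }

        public Object invoke(RemoteCacheOperation op) throws IOException {
            IOException last = null;
            for (int i = current; i < servers.size(); i++) {
                try {
                    Object result = op.execute(servers.get(i));
                    current = i;              // remember the new master
                    return result;
                } catch (IOException failure) {
                    last = failure;           // fall through to next slave
                }
            }
            throw last != null ? last
                    : new IOException("no servers configured");
        }
    }

    // Placeholder for "a get or put sent over the wire".
    interface RemoteCacheOperation {
        Object execute(InetSocketAddress server) throws IOException;
    }
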
> I think this is a fairly simple implementation, yet fairly robust.
> Since we are not doing heartbeat and mcast, we cut down on a lot of
> network traffic.
>
> Communication will be done over TCP/IP sockets, and I would probably
> like to use NIO.
>
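
For the NIO piece, the accept side of a non-blocking server is pretty
small; something along these lines (class name and port are arbitrary,
and the actual read/write of cache commands is omitted):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    // Bare-bones NIO accept loop for the gcache server.
    public class GCacheAcceptor {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.socket().bind(new InetSocketAddress(4040));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select();
                Iterator<SelectionKey> keys =
                        selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client =
                                ((ServerSocketChannel) key.channel()).accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                        // next step: read cache commands off the channel
                    }
                }
            }
        }
    }
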
> I would like to see this component be able to run on its own... i.e. no
> Geronimo needed.  We can build a Geronimo GBean and deployer around it,
> but I would like to see this component usable in many other areas,
> including outside of Geronimo.  Open source needs more "free" clustering
> implementations.  I would like this component to be broken down into 2
> major categories... server and client.
>
> After a successful implementation of master/slave, I would like to make
> the strategies pluggable, so we can provide more of a distributed cache,
> partitioning, and other ways of joining the cluster, such as
> mcast/heartbeat for those who want it.
>
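
The pluggable strategies could bottom out in something as small as a
replication-strategy interface that the server delegates to, so that
master/slave full replication, partitioning, and mcast/heartbeat
discovery are just different implementations.  Names below are made up:

    import java.io.Serializable;

    // Sketch of a possible extension point: how a write is propagated
    // to the rest of the cluster, and how membership changes are seen.
    public interface ReplicationStrategy {
        void memberJoined(String nodeId);
        void memberLeft(String nodeId);
        void replicate(Serializable key, Serializable value);
    }
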
> Thoughts and additional ideas?
>
> Thanks,
>
> Jeff

