jackrabbit-dev mailing list archives

From "Giota Karadimitriou" <Giota.Karadimitr...@eurodyn.com>
Subject RE: jackrabbit & clustering
Date Wed, 10 May 2006 16:17:43 GMT
I think I overcomplicated things in my previous email.
The real question is in fact the following, and it is a general question
(not related to clustering):
In order for a shism (shared item state manager) to notify a
transient/local state that the persistent state this object represents
has been updated, is it sufficient to find the state (using
getItemState(ItemId)) and call state.notifyStateUpdated()? Does this
take care of the rest (propagating the event to the other layers)?
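
For illustration, a rough sketch of what such a call sequence might look
like. The helper class, its name and its placement in the
org.apache.jackrabbit.core.state package are my own assumptions (placed
there because notifyStateUpdated() may not be publicly accessible), and
the signatures are approximate:

// Sketch only: assumes the helper lives in org.apache.jackrabbit.core.state
// because ItemState.notifyStateUpdated() may not be visible from outside.
package org.apache.jackrabbit.core.state;

import org.apache.jackrabbit.core.ItemId;

public class StateRefreshHelper {

    /**
     * Tell a shared item state manager that the persistent state behind
     * the given id has changed. Whether firing notifyStateUpdated() on
     * the shared state is enough to reach the local/transient states
     * layered on top of it is exactly the open question above.
     */
    public static void markUpdated(SharedItemStateManager shism, ItemId id)
            throws ItemStateException {
        ItemState state = shism.getItemState(id);
        // notify the state's listeners (e.g. overlaying states) of the update
        state.notifyStateUpdated();
    }
}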

regards
Giota

> -----Original Message-----
> From: Marcel Reutegger [mailto:marcel.reutegger@gmx.net]
> Sent: Tuesday, May 09, 2006 11:03 AM
> To: dev@jackrabbit.apache.org
> Subject: Re: jackrabbit & clustering
> 
> Giota Karadimitriou wrote:
> > Hi Marcel and the rest of the list,
> >
> > please bear with me once more. I would like to ask if the following
> > scenario makes sense before applying it in practice.
> >
> > Let's assume that I have 2 clustered nodes and that I am able to
> > access the shareditemstatemanagers on both (possibly I will make
> > some wrapper around shareditemstatemanager and use RMI or something
> > to accomplish this, but this part is of secondary importance now).
> >
> > I will name them shism1 and shism2 where shism1=shared item state
> > manager on cluster node 1 and shism2=shared item state manager on
> > cluster node 2
> > (shismn==shared item state manager on cluster node n).
> >
> > a) The first problem is making the write lock distributed. I thought
> > that maybe this could be accomplished by doing the following:
> >
> > When invoking shism1.acquireWriteLock, override it in order to also
> > invoke shism2.acquireWriteLock ... shismn.acquireWriteLock.
> >
> > This way the write lock will have been acquired on all
> > shareditemstatemanagers.
> >
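
As an illustration only, a sketch of how step a) might be wired up.
RemoteStateManager is a made-up placeholder for whatever RMI (or similar)
wrapper ends up exposing the lock operations; as far as I can tell
SharedItemStateManager does not expose these lock operations publicly, so
some such wrapper would be needed anyway:

// Hypothetical sketch of step a): after the local write lock is held,
// acquire the write lock on every other cluster node as well.
interface RemoteStateManager {
    void acquireWriteLock() throws Exception;   // blocks until granted
    void releaseWriteLock() throws Exception;
}

class ClusterWriteLock {

    private final RemoteStateManager[] otherNodes;

    ClusterWriteLock(RemoteStateManager[] otherNodes) {
        this.otherNodes = otherNodes;
    }

    // Called by node 1 once shism1 holds its own write lock.
    void acquireOnOtherNodes() throws Exception {
        for (int i = 0; i < otherNodes.length; i++) {
            otherNodes[i].acquireWriteLock();
        }
    }
}
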
> > b) Once the update operation is finished and the changes are
> > persisted in permanent storage, perform the remaining two operations:
> >
> > 1. shism1.releaseWriteLock (I will create such a method)
> > which will perform
> >
> > // downgrade to read lock
> > acquireReadLock();
> > rwLock.writeLock().release();
> > holdingWriteLock = false;
> >
> > and which will be invoked on all the shared item state managers 2,3...n:
> > shism2.releaseWriteLock ... shismn.releaseWriteLock
> >
> > Before releasing the write lock I will also perform
> >
> > shism2.cache.evict(...),  shismn.cache.evict(...)
> >
> > where (...) will be all the item state ids that existed in
> > shism1.shared.
> >
> > This way all the item states persisted on cluster node 1 will be
> > evicted from the caches of the other nodes, thus forcing those nodes
> > to load the states from persistent storage again on the next read or
> > write operation.
> >
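
Again purely as a sketch of step b), using the same made-up
RemoteStateManager idea as above (repeated here so the snippet stands on
its own); evictAll() and releaseWriteLock() stand in for whatever remote
operations the wrapper would actually provide, since cache and shared
appear to be internals of SharedItemStateManager:

// Hypothetical sketch of step b): once the changes are persisted, tell
// every other node to drop the affected states from its cache, then
// release the remote write locks.
import java.util.Collection;

class ClusterUpdateFinisher {

    interface RemoteStateManager {
        void evictAll(Collection ids) throws Exception;   // ids: ItemId instances
        void releaseWriteLock() throws Exception;
    }

    void finishUpdate(RemoteStateManager[] otherNodes, Collection changedIds)
            throws Exception {
        for (int i = 0; i < otherNodes.length; i++) {
            // drop the now-stale states so the next access on that node
            // reloads them from persistent storage
            otherNodes[i].evictAll(changedIds);
            otherNodes[i].releaseWriteLock();
        }
    }
}
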
> > Does this make sense, do you think?
> 
> I see a couple of issues with your approach:
> 
> Simply acquiring the write locks on other cluster nodes in a random
> sequential order may lead to a deadlock situation, unless the cluster
> defines a strict order which is known to all cluster nodes and locks
> are always acquired in that order.
> 
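
A minimal illustration of that strict-ordering idea (the node ids and the
NodeLock interface are made up for the example): every node sorts the
cluster members by a well-known key and always takes the write locks in
that order, so two concurrent updates cannot lock in opposite orders and
deadlock.

import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

class OrderedLockAcquirer {

    interface NodeLock {
        void acquireWriteLock() throws Exception;
    }

    // Keys are cluster node ids; TreeMap iteration yields the agreed
    // global order, identical on every node.
    void lockAll(Map locksByNodeId) throws Exception {
        TreeMap ordered = new TreeMap(locksByNodeId);
        for (Iterator it = ordered.values().iterator(); it.hasNext();) {
            ((NodeLock) it.next()).acquireWriteLock();
        }
    }
}
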
> I'm not sure if evicting the states from the caches will do its job.
> There might be local (and transient) states that are connected to the
> shared states. Simply removing them from the cache will not work in
> that case.
> 
> Finally, what happens if a cluster node crashes while holding 'remote'
> write locks on other nodes? Will they be released?
> 
> regards
>   marcel

