jackrabbit-dev mailing list archives

From defeng <defeng...@gmail.com>
Subject Re: Replace cluster-wide lock to itemstate-wide lock
Date Thu, 19 Feb 2009 18:31:10 GMT

Got your point. Thanks.

This brings up a new problem for me: how do I get the real *full* path from
an item state (ID)?
I only see a CachingHierarchyManager in the SessionISM. Is there any global
ID/path map so that I can retrieve the path from an ID in the SharedISM? Or
is there any alternative way to solve this?
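
To make the question concrete, this is the kind of helper I am after. A
rough sketch only: resolve() and nameOf() are placeholders for whatever
loads a NodeState from the SharedISM and returns its name, not existing
Jackrabbit API.

    // Sketch: reconstruct a full path by walking the parent ids upward.
    // resolve() and nameOf() are hypothetical helpers, see above.
    private String pathOf(NodeId id) throws ItemStateException {
        StringBuilder path = new StringBuilder();
        NodeState state = resolve(id);
        while (state.getParentId() != null) {
            path.insert(0, "/" + nameOf(state));
            state = resolve(state.getParentId());
        }
        return (path.length() == 0) ? "/" : path.toString();
    }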

Regards,


Dominique Pfister wrote:
> 
> Hi,
> 
> On Thu, Feb 19, 2009 at 4:25 PM, defeng <defeng.lu@gmail.com> wrote:
>>
>> Dominique,
>>
>> Thanks for your reply. As far as I can see, there is no inconsistency in
>> your sample.
>> 1. CN1 (/a)
>> 2. CN1 (/a/b)
>> 3. CN2 (/a/b)
>> 4. CN1 (/a/b/c)
>>
>> For step 3: before CN2 updates /a/b, it has to wait for CN1 to finish
>> step 2, since /a/b is globally locked by CN1. After CN2 acquires the
>> lock, following the current logic, CN2 will call the sync() method to get
>> CN1's /a/b, merge the two states, and then persist /a/b.
>>
>> For step 4: before updating, CN1 will call sync() to get CN2's /a/b and
>> merge it. So /a/b/c is still consistent.
> 
> I see, so you'd sync() before persisting the next state. This looks
> problematic to me: first, it might have quite an impact on performance.
> Second, you'd probably run into reentrancy problems, since the SharedISM
> is built to receive modified states only once, at the start of its update
> operation.
> 
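To spell out the per-item loop I had in mind (doLock()/unLock() are the
same placeholders as in my original code below; syncAndMerge() and
persist() are likewise placeholders, not existing API):

    // Sketch of the per-item variant under discussion: take the cluster
    // lock for one item only, pull in other cluster nodes' changes,
    // merge, then persist. All method names are placeholders.
    private void persistWithSync(Iterable<ItemState> states) {
        for (ItemState state : states) {
            doLock(state);           // cluster lock on this item only
            try {
                syncAndMerge(state); // fetch and merge remote changes
                persist(state);      // write the merged state
            } finally {
                unLock(state);
            }
        }
    }
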
>>
>> Per your suggestion, a global lock on one single item state is not safe.
>> Instead, we should lock on a kind of path prefix (pattern). For instance,
>> if a CN locks "/a/b", then "/a/c" can still be updated by another CN
>> concurrently.
> 
> Exactly. If you have a situation where updates seldom overlap
> hierarchically (or you can still design your application this way), you
> should be able to make concurrent updates.
> 
> Kind regards
> Dominique
> 
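As a first cut, that overlap test could be a simple path-prefix
comparison. A sketch, with the root paths assumed to be already determined
per update operation:

    // Sketch: two update operations may run concurrently if neither
    // root path is a prefix of the other (and, per Dominique's caveat,
    // no node references are affected).
    private boolean mayRunConcurrently(String rootA, String rootB) {
        // normalize with a trailing slash so "/a/b" does not
        // falsely match "/a/bc"
        String a = rootA.endsWith("/") ? rootA : rootA + "/";
        String b = rootB.endsWith("/") ? rootB : rootB + "/";
        return !a.startsWith(b) && !b.startsWith(a);
    }

With this test, the /a/b and /a/c updates from above could proceed in
parallel, while /a and /a/b would still serialize.
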
>>
>> Please correct me if I have misunderstood anything. Thanks again.
>>
>>
>>
>> Dominique Pfister wrote:
>>>
>>> Hi,
>>>
>>> this approach could lead to corrupted item states: before updating
>>> them, the SharedISM verifies that its update operation is consistent. If
>>> this update is divided into individual operations, you can easily
>>> overwrite an item that has been modified after your update operation
>>> started, e.g.:
>>>
>>> Cluster Node 1: saves /a, /a/b and /a/b/c
>>> Cluster Node 2: saves /a/b
>>>
>>> If the operations in this example are interleaved like this:
>>>
>>> CN1 (/a)
>>> CN1 (/a/b)
>>> CN2 (/a/b)
>>> CN1 (/a/b/c)
>>>
>>> CN1 will have an inconsistent state at the end of its update operation.
>>>
>>> If you're concerned about making update operations concurrent in a
>>> cluster, I'd suggest the following: determine the parent of your
>>> update operation (in the example above, this would be /a for CN1 and
>>> /a/b for CN2). If the parents don't have a path prefix in common and no
>>> node references are affected, the two update operations should not
>>> lead to inconsistent item states, whichever way they're interleaved.
>>>
>>> Cheers
>>> Dominique
>>>
>>> On Wed, Feb 18, 2009 at 6:04 PM, defeng <defeng.lu@gmail.com> wrote:
>>>>
>>>> Currently, when I update an item state, I need to acquire a cluster
>>>> lock (Journal.doLock()). This lock blocks updates on all other item
>>>> states. I want to lock only *one* item state in the cluster, so I want
>>>> to modify SharedISM.update(). (I do not use XA.) Are there any side
>>>> effects?
>>>>
>>>>    public void update(ChangeLog local, EventStateCollectionFactory factory)
>>>>            throws ReferentialIntegrityException, StaleItemStateException,
>>>>            ItemStateException {
>>>>
>>>>        // beginUpdate(oneItemLog, factory, null).end();
>>>>
>>>>        // process each state of the change log individually
>>>>        Iterator deletedStates = local.deletedStates();
>>>>        while (deletedStates.hasNext()) {
>>>>            ItemState state = (ItemState) deletedStates.next();
>>>>            updateOneItemState(state, factory);
>>>>        }
>>>>        Iterator modifiedStates = local.modifiedStates();
>>>>        while (modifiedStates.hasNext()) {
>>>>            ItemState state = (ItemState) modifiedStates.next();
>>>>            updateOneItemState(state, factory);
>>>>        }
>>>>        Iterator addedStates = local.addedStates();
>>>>        while (addedStates.hasNext()) {
>>>>            ItemState state = (ItemState) addedStates.next();
>>>>            updateOneItemState(state, factory);
>>>>        }
>>>>    }
>>>>
>>>>    private void updateOneItemState(ItemState state,
>>>>            EventStateCollectionFactory factory)
>>>>            throws ReferentialIntegrityException, StaleItemStateException,
>>>>            ItemStateException {
>>>>        // wrap the single state in its own one-item change log
>>>>        // (a complete version would call deleted()/modified() instead of
>>>>        // added() to match the original operation)
>>>>        ChangeLog oneItemLog = new ChangeLog();
>>>>        oneItemLog.added(state);
>>>>        try {
>>>>            doLock(state); // only lock this one state in the cluster
>>>>            beginUpdate(oneItemLog, factory, null).end();
>>>>        } finally {
>>>>            unLock(state);
>>>>        }
>>>>    }
>>>>
>>>
>>>
>>
>>
>>
> 
> 


