jackrabbit-dev mailing list archives

From "Jukka Zitting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (JCR-1552) Concurrent conflicting property creation sometimes doesn't fail
Date Thu, 24 Apr 2008 11:47:26 GMT

    [ https://issues.apache.org/jira/browse/JCR-1552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591991#action_12591991 ]

Jukka Zitting commented on JCR-1552:

> That being said: your interpretation makes this feature less useful to clients (IMHO).
> From the client's point of view, it should be irrelevant *how* an overlapping update happened.
> When the client gets a property value, modifies it, and can write it back although the
> property has changed, then that *is* an overlapping update that wasn't caught.

That's again assuming that the "get property" operation is included in the control flow. We
basically have two separate issues here:

1) The getProperty(), setProperty(), save() case. This is equivalent to a database client
doing a SELECT followed by an UPDATE on the same row. A database that supports the isolation
levels REPEATABLE READ or SERIALIZABLE will guarantee that if the transaction succeeds, no other
transaction can have updated the row between the SELECT and UPDATE statements. Jackrabbit has
never supported such isolation levels, so the lack of them is not a regression. We can discuss
implementing higher isolation levels as a new feature request, but note that the feature a) has
a high design and runtime cost, b) is not needed by many (most?) clients, and c) already has
a standard solution (JCR locks) for clients that do need the functionality. In any case this
is IMHO outside the scope of this issue.
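The database analogy above can be made concrete with a plain-Java sketch (not JCR API; the shared map stands in for the repository, and the two interleaved read-modify-write sequences stand in for two sessions). Without REPEATABLE READ/SERIALIZABLE isolation or a lock, the first session's update is silently lost:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration of the "lost update" in case 1: both sessions read the
// old value, then both write back, so session 1's change disappears.
public class LostUpdateDemo {
    public static void main(String[] args) {
        Map<String, String> store = new ConcurrentHashMap<>();
        store.put("b", "0");

        String v1 = store.get("b");   // session 1: SELECT
        String v2 = store.get("b");   // session 2: SELECT (before s1 saves)
        store.put("b", v1 + "1");     // session 1: UPDATE + save
        store.put("b", v2 + "2");     // session 2: UPDATE + save, overwrites s1

        // Session 1's write is gone; nothing detected the overlap.
        System.out.println(store.get("b")); // prints "02", not "012"
    }
}
```

Detecting this requires tracking the read as part of the transaction, which is exactly the design and runtime cost mentioned above.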

2) The setProperty(), save() case. This is equivalent to a database client doing a prepareStatement
followed by executeUpdate on an UPDATE statement. I still don't see how or why such a client
could care about concurrent updates (except if the parent node gets removed), and thus the
fact that we no longer throw exceptions for some such cases is IMHO an improvement rather
than a regression. Based on this reasoning I propose that we resolve this issue as Won't Fix
and perhaps create a new improvement issue to get rid of the remaining InvalidItemStateExceptions
from concurrent property updates.
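The distinction between the two cases can be sketched in plain Java (again an analogy, not JCR API): a blind write carries no dependency on a previously read value, so last-writer-wins is well-defined, whereas a client that does depend on what it read can make that dependency explicit with a compare-and-set (the optimistic analogue of taking a JCR lock):

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustration of case 2: unconditional writes cannot conflict in any
// meaningful way -- neither write is based on a stale read.
public class BlindWriteDemo {
    public static void main(String[] args) {
        AtomicReference<String> prop = new AtomicReference<>("0");

        prop.set("1");                  // session 1: setProperty + save
        prop.set("2");                  // session 2: setProperty + save
        System.out.println(prop.get()); // "2" -- last writer wins, no surprise

        // A client that DOES depend on the value it read should say so:
        // this guarded write fails because the value is no longer "1".
        boolean ok = prop.compareAndSet("1", "3");
        System.out.println(ok);         // false -- the overlap is detected
    }
}
```

The point is that detection only makes sense when there is a read to protect; throwing InvalidItemStateException for blind writes protects nothing.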

> Concurrent conflicting property creation sometimes doesn't fail
> ---------------------------------------------------------------
>                 Key: JCR-1552
>                 URL: https://issues.apache.org/jira/browse/JCR-1552
>             Project: Jackrabbit
>          Issue Type: Bug
>          Components: jackrabbit-core
>    Affects Versions: core 1.4.2
>            Reporter: Thomas Mueller
>            Assignee: Stefan Guggisberg
>             Fix For: 1.5
> The following test prints "Success":
>        Session s1 = ...
>        Session s2 = ...
>        s1.getRootNode().setProperty("b", "0"); // init with zero
>        s1.getRootNode().setProperty("b", (String) null); // delete
>        s1.save();
>        s1.getRootNode().setProperty("b", "1");
>        s2.getRootNode().setProperty("b", "2");
>        s1.save();
>        s2.save();
>        System.out.println("Success");
> However, if the line marked "... // delete" is commented out,
> it fails with the following exception:
> javax.jcr.InvalidItemStateException:
> cafebabe-cafe-babe-cafe-babecafebabe/{}b: the item cannot be saved
> because it has been modified externally.
>        at org.apache.jackrabbit.core.ItemImpl.getTransientStates(ItemImpl.java:246)
>        at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:928)
>        at org.apache.jackrabbit.core.SessionImpl.save(SessionImpl.java:849)
> It should fail in all cases. If we decide it shouldn't fail, it needs to be documented.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
