openjpa-dev mailing list archives

From "Ravi P Palacherla (JIRA)" <>
Subject [jira] Commented: (OPENJPA-1163) Data consistency issues while modifying collections.
Date Wed, 05 Aug 2009 16:49:14 GMT


Ravi P Palacherla commented on OPENJPA-1163:

>> If this issue is a defect then the default value should flip.

The reason for not switching the default value is that I am not completely confident the previous behavior is a defect.
So, without confirmation either way, I felt it is more convenient for existing applications that rely on the current default value to upgrade to the latest version without needing any additional properties. There is also a doc JIRA (OPENJPA-1223) open to document this.

In my opinion, the previous behavior is a defect.
I may be wrong, but I think it is a defect for the following reasons:

1)  The previous default behavior is as follows (a sketch appears after this list):
    When the number of modifications (adds/removes) to a collection exceeds the initial size of the collection, the logic is to remove everything from the collection and re-insert the objects. In the process of re-inserting, it does not consider data that was modified as part of another transaction.
    So the end result is that the data inserted into the database depends on the number of modifications made to the collection.
    I think this is a bug, because the data inserted into the database should be consistent irrespective of the number of modifications made to the collection.
2)  When the data modified by two concurrent transactions does not interfere, both transactions should win.
    For example, consider a table A that has 5 rows (integer primary keys 1-5) and uses row-level locking.
    Transaction 1 modifies rows 1-3 while transaction 2 modifies rows 4-5 at the same time.
    In this case, I think both transactions have to win.
    That is not possible with the default value of autoOff when the number of modifications to A exceeds 5.
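
To make point 1 concrete, below is a minimal, hypothetical sketch of an "auto off" change tracker. It is not OpenJPA's actual ChangeTracker code; it only illustrates why the number of modifications, rather than the data itself, ends up deciding what gets written to the database.

import java.util.HashSet;
import java.util.Set;

// Illustrative only -- not OpenJPA's ChangeTracker. It records adds/removes
// until the change count exceeds the collection's initial size, then stops
// tracking ("auto off").
public class AutoOffChangeTracker<E> {
    private final int initialSize;
    private final Set<E> added = new HashSet<E>();
    private final Set<E> removed = new HashSet<E>();
    private boolean tracking = true;

    public AutoOffChangeTracker(int initialSize) {
        this.initialSize = initialSize;
    }

    public void recordAdd(E element) {
        if (!tracking)
            return;
        removed.remove(element);
        added.add(element);
        checkOverflow();
    }

    public void recordRemove(E element) {
        if (!tracking)
            return;
        added.remove(element);
        removed.add(element);
        checkOverflow();
    }

    // Once the number of tracked changes exceeds the initial size, give up.
    private void checkOverflow() {
        if (added.size() + removed.size() > initialSize) {
            tracking = false;
            added.clear();
            removed.clear();
        }
    }

    public boolean isTracking() { return tracking; }
    public Set<E> getAdded()    { return added; }
    public Set<E> getRemoved()  { return removed; }
}

While isTracking() is true, a flush can issue targeted INSERT/DELETE statements for just the changed elements. Once it is false, the only information left is the in-memory snapshot of the collection, so the fallback is to delete every row and re-insert that snapshot, which silently overwrites rows added by a concurrent transaction.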

If you think adding an option to ProxyManager is a better fit than a compatibility configuration, then I will modify my fix.
May I ask whether it has any additional advantage other than a smaller patch footprint?
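
For reference, here is how the two styles would typically be wired up from code. The property keys openjpa.Compatibility and openjpa.ProxyManager exist in OpenJPA, but the option names "autoOff" / "AutoOff" below are placeholders for whatever the patch finally calls them, and "test-pu" is a hypothetical persistence unit.

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class ConfigSketch {
    public static void main(String[] args) {
        Map<String, String> props = new HashMap<String, String>();

        // Style 1: a compatibility flag (what the current patch proposes).
        // "autoOff" is a placeholder name, not the committed flag.
        props.put("openjpa.Compatibility", "autoOff=false");

        // Style 2: a plugin option on the proxy manager instead.
        // Again, the option name is illustrative only.
        // props.put("openjpa.ProxyManager", "default(AutoOff=false)");

        EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("test-pu", props);
        emf.close();
    }
}

Either way, the application sets a single property; the question above is whether the ProxyManager plugin is the more natural home for the knob.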


> Data consistency issues while modifying collections.
> ----------------------------------------------------
>                 Key: OPENJPA-1163
>                 URL:
>             Project: OpenJPA
>          Issue Type: Bug
>          Components: kernel
>         Environment: openJPA trunk. Derby DB.
>            Reporter: Ravi P Palacherla
>            Assignee: Ravi P Palacherla
>         Attachments: OPENJPA-1163_trunk.patch
>   Original Estimate: 0h
>  Remaining Estimate: 0h
> There are data consistency issues when modifying a larger number of elements in a collection vs. a smaller number.
> Following is a detailed explanation about the issue with example:
> - Entity A has a collection of Entities AItems with cascade ALL.
> - Test case :
>   Clear all the data inside tables representing Entity A and AItems.  
>   Create 3 entity managers em1,em2 and em3.
>   em1.begin()
>       create A on em1 with id "1"
>       add 10 elements of AItems (id's from 0-9) to the created A(id 1).
>       persist A.
>   em1.commit()
>   em1.begin()
>       merge A ( created in the previous step)
>       Remove 3 elements of AItems from the merged A.
>       Add 3 elements of AItems ( id's 10,11,12) to the merged A (id 1).
> Without committing em1:
>   em2.begin()
>       query database to fetch A and construct object result2 of entity A.
>       Add 3 elements of AItems (id's 13,14,15) to the fetched A (result2).
>   em2.commit()
>   em1.commit()
>   em3.begin()
>       query database to check the size of AItems that are related to A (id 1).
>   em3.commit()
>   The result of em3's query for AItems related to A returns 13, as expected:
>   13 (initial 10 - em1's 3 + em1's 3 + em2's 3).
> When the same test case is repeated with 10 elements removed and added instead of 3, I get wrong results:
>     Add initial 10 AItems (id's 0-9) for A.
>     commit()
>     em1 will remove 10 AItems from the collection of A.
>     em1 will add 10 AItems (id's 10-19) to collection of A.
>     em2 will add 10 AItems (id's 20-29) to collection of A.
>     Commit em2.
>     Commit em1.
>     Then, instead of 20 elements (initial 10 - em1's 10 + em1's 10 + em2's 10), I see only 10 elements.
>     The 10 elements that I see are em1's added AItems (id's 10-19).
> I think the cause of the issue is that, when the number of modified elements in a collection exceeds the initial element count of the collection, collection tracking is disabled and OpenJPA tries to do the following:
>  -- Delete everything from the collection.
>  -- Insert the data back into the collection.
> While inserting the data back, it does not consider the dirty records (em2's 10 added elements) because collection tracking is disabled.
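
Below is a minimal sketch of the 10-element sequence quoted above, assuming application-managed entity managers and the A/AItem entities with a cascade-ALL collection described in the issue. The persistence unit name, field names, and the addItem/removeItem helpers are assumptions for illustration, not taken from the attached patch, and in a real test each entity class would live in its own file.

import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import javax.persistence.Persistence;

@Entity
class A {
    @Id int id;
    @OneToMany(cascade = CascadeType.ALL)
    List<AItem> items = new ArrayList<AItem>();
    A() { }
    A(int id) { this.id = id; }
    List<AItem> getItems() { return items; }
    void addItem(AItem i) { items.add(i); }
    void removeItem(AItem i) { items.remove(i); }
}

@Entity
class AItem {
    @Id int id;
    AItem() { }
    AItem(int id) { this.id = id; }
}

public class Openjpa1163Repro {
    public static void main(String[] args) {
        // "test-pu" is a placeholder persistence unit; it must list A and
        // AItem and point at the test database (Derby in the issue).
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("test-pu");
        EntityManager em1 = emf.createEntityManager();
        EntityManager em2 = emf.createEntityManager();
        EntityManager em3 = emf.createEntityManager();

        // Create A(1) with AItems 0-9 and commit.
        em1.getTransaction().begin();
        A a = new A(1);
        for (int i = 0; i <= 9; i++)
            a.addItem(new AItem(i));
        em1.persist(a);
        em1.getTransaction().commit();

        // em1: remove the original 10 items and add items 10-19; no commit yet.
        em1.getTransaction().begin();
        A merged = em1.merge(a);
        for (AItem item : new ArrayList<AItem>(merged.getItems()))
            merged.removeItem(item);
        for (int i = 10; i <= 19; i++)
            merged.addItem(new AItem(i));

        // em2: fetch the same A, add items 20-29, and commit while em1 is open.
        em2.getTransaction().begin();
        A result2 = em2.find(A.class, 1);
        for (int i = 20; i <= 29; i++)
            result2.addItem(new AItem(i));
        em2.getTransaction().commit();

        // Now commit em1.
        em1.getTransaction().commit();

        // em3: count the AItems related to A(1). Expected 20 (em1's 10 new
        // items + em2's 10 new items).
        em3.getTransaction().begin();
        int count = em3.find(A.class, 1).getItems().size();
        System.out.println("AItems for A(1): " + count);
        em3.getTransaction().commit();

        em1.close();
        em2.close();
        em3.close();
        emf.close();
    }
}

With the previous default, the printed count comes out as 10 instead of the expected 20, because em1's flush falls back to delete-everything-and-re-insert and drops the rows em2 committed in between.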

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
