ignite-user mailing list archives

From kotamrajuyashasvi <kotamrajuyasha...@gmail.com>
Subject Re: work around for problem where ignite query does not include objects added into Cache from within a transaction
Date Tue, 19 Sep 2017 07:49:34 GMT

Thanks for your responses. I just wanted a temporary workaround until the
actual feature is implemented, even at the cost of performance. I have
thought of another approach to this problem.

I plan to use my own transaction mechanism instead of Ignite transactions. I
can use explicit locks to lock the keys, together with my own commit and
rollback functionality. In the primary-key POJO class, along with the PK
fields, I add a field 'transaction_id' that holds the id of the transaction
(some unique id) in which the row was inserted within the transaction; by
default it is null. I do not perform updates, deletes, or inserts directly
through queries. For an update or delete I first run a select query to get
the keys to be deleted/updated, then lock those keys using explicit locks,
also checking whether I had to wait to acquire each lock or got it
immediately. If I had to wait, I rerun the select query, since keys that
were not yet locked might no longer be eligible for the select query or
might have changed their values. Before locking the keys I make a few
checks.
(a) If the key's transaction_id is not null and differs from the present
transaction's id, I discard the key and continue with the remaining keys
(it belongs to some other transaction).
(b) If the key is already present within the transaction (i.e. same key
fields except the transaction_id), I discard it, because that row has been
updated within the transaction and I need to use the updated value of the
PK if it is eligible for the query result.
(c) I maintain a HashMap&lt;PK,Value&gt; of the old cache rows I have just
locked, and a HashMap&lt;PK,Value&gt; of new rows updated or inserted with
transaction_id set to the current transaction's id. I also maintain a
delete HashSet of keys to be deleted within the transaction: whenever a
delete request comes I do not delete directly, but first push the key into
the delete HashSet, and at commit time I delete them all. During a select,
if a key is present in the delete HashSet I discard it, and if it is
present in the oldcacherows or newcacherows HashMap I do not acquire locks.
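The pre-lock checks (a)-(c) above can be sketched in Java. This is only an
illustration: the `Pk` class, `id` field, and `PreLockFilter` helper are
hypothetical stand-ins for the real PK POJO and query machinery, with plain
collections doing the bookkeeping instead of Ignite types.

```java
import java.util.*;

// Stand-in for the PK POJO described above: 'id' represents the business
// key fields; transactionId is null for committed (publicly visible) rows.
final class Pk {
    final long id;
    final String transactionId;

    Pk(long id, String transactionId) {
        this.id = id;
        this.transactionId = transactionId;
    }

    // Cache-level equality includes transactionId, so a transaction's
    // private copy of a row is a distinct cache entry from the old row.
    @Override public boolean equals(Object o) {
        if (!(o instanceof Pk)) return false;
        Pk p = (Pk) o;
        return id == p.id && Objects.equals(transactionId, p.transactionId);
    }

    @Override public int hashCode() {
        return Objects.hash(id, transactionId);
    }
}

final class PreLockFilter {
    // Filters the keys returned by the select query before locking,
    // applying checks (a)-(c).
    static List<Pk> filter(List<Pk> selected, String currentTx,
                           Set<Long> keysTouchedInTx, Set<Pk> deleteSet) {
        List<Pk> toLock = new ArrayList<>();
        for (Pk k : selected) {
            // (a) private row of another, still-uncommitted transaction
            if (k.transactionId != null && !k.transactionId.equals(currentTx))
                continue;
            // (b) business key already updated in this transaction:
            // the transaction's own copy will be used instead
            if (keysTouchedInTx.contains(k.id))
                continue;
            // (c) row already scheduled for deletion in this transaction
            if (deleteSet.contains(k))
                continue;
            toLock.add(k);
        }
        return toLock;
    }
}
```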

Most of these post-select checks take constant time (O(1)) since I use
HashMaps and HashSets. Rerunning the query will degrade performance, but
that should not happen frequently.

Once I obtain the locks: for a delete I just push the key into the delete
HashSet. For an update I modify the value accordingly and put it in the
cache, but with the key's transaction_id field set to the current
transaction's id; hence the old row still exists and remains visible to
other transactions. I also put this row into the newcacherows HashMap. For
an insert I likewise put the new row into the cache with transaction_id set
to the present transaction's id, so it is ignored by other transactions,
and I put the inserted row into the newcacherows HashMap. During commit I
put all rows in newcacherows into the cache with transaction_id as null,
replacing the old rows, then delete any rows still carrying the current
transaction's id, and also delete the rows in the delete HashSet. During
rollback I just remove the temporary rows inserted during the transaction,
since no rows with transaction_id as null were actually inserted or
deleted. I also maintain a list of the locks I acquired, and during
commit/rollback I release all the locks.
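The commit/rollback bookkeeping above can be sketched as follows. A plain
HashMap stands in for the IgniteCache, string keys/values keep the example
small, and a "#txId" key suffix plays the role of the transaction_id field
in the PK POJO; all names here are illustrative, not real Ignite API.

```java
import java.util.*;

// Sketch of the per-transaction buffer: private copies go into the cache
// under a txId-qualified key, so other transactions (which only read keys
// without a txId suffix) never see uncommitted data.
final class TxBuffer {
    final Map<String, String> cache;   // stand-in for the IgniteCache
    final String txId;
    final Map<String, String> newCacheRows = new HashMap<>(); // key -> new value
    final Set<String> deleteSet = new HashSet<>();            // deferred deletes

    TxBuffer(Map<String, String> cache, String txId) {
        this.cache = cache;
        this.txId = txId;
    }

    void update(String key, String value) {
        cache.put(key + "#" + txId, value); // old row stays visible to others
        newCacheRows.put(key, value);
    }

    void delete(String key) {
        deleteSet.add(key);                 // deferred until commit
    }

    void commit() {
        for (Map.Entry<String, String> e : newCacheRows.entrySet()) {
            cache.put(e.getKey(), e.getValue());   // publish without txId
            cache.remove(e.getKey() + "#" + txId); // drop the private copy
        }
        for (String key : deleteSet)
            cache.remove(key);
        // ...release all acquired locks here
    }

    void rollback() {
        for (String key : newCacheRows.keySet())
            cache.remove(key + "#" + txId); // discard private copies only
        // ...release all acquired locks here
    }
}
```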

I also have a workaround for the problem where a select query might return
partial results of a commit. The solution is to maintain a commit_bit in
the row object, which is 0 by default. While inserting into the cache
during a commit I set commit_bit to 1, and after the commit, i.e. once all
rows are in the cache, I update commit_bit back to 0 for all these rows.
During a select one additional check is then made:
(d) If any row has commit_bit set to 1, it indicates that the row was
inserted in the middle of a commit, so additional rows might still be
inserted/updated that would change the query result; hence the query is
rerun until no returned row has commit_bit as 1.
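Check (d)'s re-run loop might look like the sketch below, assuming the real
select is wrapped behind a `Supplier`; the `Row` class and `StableSelect`
helper are hypothetical, standing in for the actual value class and SQL
query execution.

```java
import java.util.*;
import java.util.function.Supplier;

// Illustrative value class carrying the commit_bit described above.
final class Row {
    final String value;
    final int commitBit; // 1 while a commit is still writing, 0 otherwise

    Row(String value, int commitBit) {
        this.value = value;
        this.commitBit = commitBit;
    }
}

final class StableSelect {
    // Re-runs the query until no returned row has commitBit == 1, i.e.
    // until no partially published commit is observed in the result set.
    static List<Row> run(Supplier<List<Row>> query) {
        while (true) {
            List<Row> rows = query.get();
            boolean midCommit = false;
            for (Row r : rows) {
                if (r.commitBit == 1) {
                    midCommit = true;
                    break;
                }
            }
            if (!midCommit)
                return rows; // stable snapshot: no commit in flight
            // otherwise rerun: more rows from that commit may still appear
        }
    }
}
```

Note this loop can spin while a commit is in flight, which matches the
stated trade-off of accepting degraded performance for correctness.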

Sent from: http://apache-ignite-users.70518.x6.nabble.com/
