ignite-issues mailing list archives

From "Ivan Pavlukhin (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (IGNITE-10078) Node failure during concurrent partition updates may cause partition desync between primary and backup.
Date Mon, 01 Apr 2019 10:34:00 GMT

    [ https://issues.apache.org/jira/browse/IGNITE-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16806631#comment-16806631
] 

Ivan Pavlukhin commented on IGNITE-10078:
-----------------------------------------

[~ascherbakov], I also suggest improving the terminology a little. In my understanding, the
term _Gap_ is currently used both for received counter intervals (e.g. committed transactions)
waiting in a queue and for missing intervals which have not been received yet. I suggest using
the following terms, which were assumed when this concept was introduced for MVCC:
1. A _pending update_ for updates waiting in a queue to be applied.
2. A _gap_ for counter updates which have not been received yet.
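To illustrate the distinction, here is a minimal, hypothetical Java sketch (class and method names are illustrative, not Ignite's actual partition update counter implementation): counters that arrive out of order sit in a queue of _pending updates_, and the missing range below them is a _gap_:

```java
import java.util.TreeSet;

/**
 * Minimal sketch of an update counter distinguishing pending updates
 * from gaps. Not Ignite's real implementation; names are illustrative.
 */
class CounterSketch {
    private long applied; // highest counter applied in order

    // Pending updates: received out of order, waiting for the gap to close.
    private final TreeSet<Long> pending = new TreeSet<>();

    void update(long cntr) {
        if (cntr == applied + 1) {
            applied = cntr;

            // Drain pending updates that became contiguous.
            while (pending.remove(applied + 1))
                applied++;
        }
        else if (cntr > applied)
            pending.add(cntr); // received, but a gap exists below it
    }

    /** True while some counters below the pending updates are missing. */
    boolean hasGap() {
        return !pending.isEmpty();
    }

    long applied() {
        return applied;
    }
}
```

With this vocabulary, "detect gaps on recovery" means checking whether the queue of pending updates is non-empty after replaying WAL.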

> Node failure during concurrent partition updates may cause partition desync between primary and backup.
> -------------------------------------------------------------------------------------------------------
>
>                 Key: IGNITE-10078
>                 URL: https://issues.apache.org/jira/browse/IGNITE-10078
>             Project: Ignite
>          Issue Type: Bug
>            Reporter: Alexei Scherbakov
>            Assignee: Alexei Scherbakov
>            Priority: Major
>             Fix For: 2.8
>
>
> This is possible if some updates are not written to WAL before node failure. They will
not be applied by rebalancing, because the partition counters are the same in the following scenario:
> 1. Start grid with 3 nodes, 2 backups.
> 2. Preload some data to partition P.
> 3. Start two concurrent transactions, each writing a single key to the same partition P;
the keys are different:
> {noformat}
> try (Transaction tx = client.transactions().txStart(PESSIMISTIC, REPEATABLE_READ, 0, 1)) {
>     client.cache(DEFAULT_CACHE_NAME).put(k, v);
>
>     tx.commit();
> }
> {noformat}
> 4. Order updates on the backup such that the update with the greater partition counter is
written to WAL, while the update with the lesser partition counter fails (the failure handler
triggers) before it is added to WAL.
> 5. Return the failed node to the grid; observe that no rebalancing happens because the
partition counters are the same.
> Possible solution: detect gaps in update counters on recovery and, if gaps are detected,
force rebalancing from a node without gaps.
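The recovery check proposed in the quoted issue could be sketched roughly as follows (a hypothetical helper; names and structure are illustrative, not Ignite's actual recovery code). The key point is that comparing plain counters is not enough: equal counters with local gaps mean some updates were lost before reaching WAL.

```java
/**
 * Hypothetical sketch of the proposed recovery decision. A partition whose
 * recovered counter state contains gaps cannot trust its counter value and
 * should be rebalanced from a gap-free owner, even if the counters match.
 */
class RecoverySketch {
    /** Recovered per-partition counter state. */
    static class CounterState {
        final long counter;    // highest applied update counter
        final boolean hasGaps; // missing intervals detected during recovery

        CounterState(long counter, boolean hasGaps) {
            this.counter = counter;
            this.hasGaps = hasGaps;
        }
    }

    /** True if the local partition must be forcibly rebalanced from the remote node. */
    static boolean needForcedRebalance(CounterState local, CounterState remote) {
        // Same counters are not proof of sync when the local node has gaps.
        if (local.hasGaps && !remote.hasGaps)
            return true;

        return local.counter < remote.counter;
    }
}
```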



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
