ignite-issues mailing list archives

From "Vladimir Ershov (JIRA)" <j...@apache.org>
Subject [jira] [Assigned] (IGNITE-1605) Provide stronger data loss check
Date Thu, 19 Nov 2015 13:59:10 GMT

     [ https://issues.apache.org/jira/browse/IGNITE-1605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vladimir Ershov reassigned IGNITE-1605:

    Assignee: Vladimir Ershov  (was: Alexey Goncharuk)

> Provide stronger data loss check
> --------------------------------
>                 Key: IGNITE-1605
>                 URL: https://issues.apache.org/jira/browse/IGNITE-1605
>             Project: Ignite
>          Issue Type: Task
>            Reporter: Yakov Zhdanov
>            Assignee: Vladimir Ershov
> Need to provide a stronger data loss check.
> Currently a node can fire the event EVT_CACHE_REBALANCE_PART_DATA_LOST.
> However, this is not enough: if the application has a strong requirement on its behavior
> on data loss (e.g. further cache updates should throw an exception), this requirement
> currently cannot be met even with a cache interceptor.
> Suggestions:
> * Introduce a CacheDataLossPolicy enum (FAIL_OPS, NOOP) and add it to the cache configuration.
> * If a node fires PART_LOST_EVT, then any update to a lost partition will throw (or will not
> throw) an exception, according to the configured DataLossPolicy.
> * ForceKeysRequest should be completed with an exception (if plc == FAIL_OPS) when all nodes
> to request from are gone, so all gets/puts/txs should fail.
> * Add a public API method to allow recovery from the failed state.
> Another solution is to detect partition loss at the time the partition exchange completes.
> Since we hold the topology lock during the exchange, we can easily check that there are no
> owners for a partition and act as a topology validator in case the FAIL_OPS policy is
> configured. One thing needs to be carefully analyzed: the demand worker should not mark a
> partition as owning if the last owner leaves the grid before the corresponding exchange
> completes.
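The exchange-time check described above amounts to scanning the post-exchange ownership map for partitions with no remaining owners. A self-contained sketch of that scan follows; the class and method names here are illustrative, not actual Ignite internals, and the real check would run under the topology lock as noted above.

```java
import java.util.*;

// Hypothetical sketch: detect lost partitions when a partition exchange
// completes, by finding partitions with no remaining owner.
class ExchangeLossCheckSketch {
    /**
     * @param owners Map of partition id to the node ids owning it after the exchange.
     * @param parts  Total number of partitions in the cache.
     * @return Partitions that no node owns (i.e. whose data is lost).
     */
    static Set<Integer> detectLostPartitions(Map<Integer, Set<UUID>> owners, int parts) {
        Set<Integer> lost = new TreeSet<>();

        for (int p = 0; p < parts; p++) {
            Set<UUID> o = owners.get(p);

            if (o == null || o.isEmpty())
                lost.add(p); // no owner left -> partition data is lost
        }

        return lost;
    }
}
```

If this set is non-empty and the FAIL_OPS policy is configured, the node could act as a topology validator and reject subsequent operations on those partitions.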

This message was sent by Atlassian JIRA
