ignite-issues mailing list archives

From "Denis Magda (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations
Date Tue, 31 May 2016 11:24:12 GMT

    https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307586#comment-15307586

Denis Magda commented on IGNITE-2655:

Dmitriy, it means that in the case of {{FairAffinityFunction}} the method can check both the primary
and the backup against the already assigned nodes. The already assigned nodes may or may not
contain the primary.

Vlad, I think that we can preserve the same semantics and behavior at the level of {{FairAffinityFunction}}
if we do the following at the implementation level:
- if {{tier=0}} is checked (primary), then we prepare a new assignments list that has the
primary being checked first in the list, followed by the nodes that are already assigned (the backups);
- after that we iterate over the sublist, calling {{affinityBackupFilter.apply(...)}} for
every backup in the assignments list. If during the iteration we get {{false}} for
at least one backup, then the primary is not assignable.

Such an implementation will help us preserve the same semantics as {{RendezvousAffinityFunction}}, where:
n - potential backup to check
assigned - list of current partition holders (the first node in the list is the primary)
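To make the tier-0 check above concrete, here is a minimal, self-contained sketch in plain Java. Note that the {{Node}} model, the rack-based filter, and the method names are illustrative stand-ins, not Ignite's actual API; only the check's shape follows the steps described in this comment:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

// Illustrative stand-in for a cluster node: only a "rack" attribute matters here.
class Node {
    final String rack;
    Node(String rack) { this.rack = rack; }
}

public class PrimaryCheckSketch {
    // Stand-in for affinityBackupFilter.apply(...): a node is acceptable only
    // if no OTHER current holder lives in the same rack.
    static final BiPredicate<Node, List<Node>> BACKUP_FILTER =
        (candidate, assigned) -> assigned.stream()
            .noneMatch(h -> h != candidate && h.rack.equals(candidate.rack));

    // The tier=0 check described above: put the primary candidate first in the
    // assignments list, follow it with the already assigned backups, then
    // re-apply the filter to every backup. If any backup now fails, the
    // primary candidate is not assignable.
    static boolean primaryAssignable(Node primaryCandidate, List<Node> assignedBackups) {
        List<Node> assignments = new ArrayList<>();
        assignments.add(primaryCandidate);
        assignments.addAll(assignedBackups);

        for (Node backup : assignedBackups) {
            if (!BACKUP_FILTER.test(backup, assignments))
                return false; // primary collides with an existing backup
        }
        return true;
    }

    public static void main(String[] args) {
        List<Node> backups = List.of(new Node("rack2"), new Node("rack3"));
        // rack1 conflicts with no backup; rack2 collides with an existing backup
        System.out.println(primaryAssignable(new Node("rack1"), backups)); // true
        System.out.println(primaryAssignable(new Node("rack2"), backups)); // false
    }
}
```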

> AffinityFunction: primary and backup copies in different locations
> ------------------------------------------------------------------
>                 Key: IGNITE-2655
>                 URL: https://issues.apache.org/jira/browse/IGNITE-2655
>             Project: Ignite
>          Issue Type: Bug
>            Reporter: Denis Magda
>            Assignee: Vladislav Pyatkov
>            Priority: Critical
>              Labels: important
>             Fix For: 1.7
> There is a use case when primary and backup copies have to be located in different racks,
> buildings, cities, etc.
> A simple scenario is the following. When nodes are started they will have either "rack1"
> or "rack2" in their attributes list, and we will enforce that the backups won't be selected
> among the nodes with the same attribute value.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}} that works perfectly
> for the scenario above, but only when the number of backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will only guarantee
> that each backup is located in a different location from the primary, but will NOT guarantee
> that the backups themselves are spread out across different locations as well.
> So we need to provide an API that will allow spreading the primary and ALL backup copies
> across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following method
> {{AffinityBackupFilter.isAssignable(Node n, List<Node> assigned)}}
> where n is the potential backup to check, and assigned is the list of current partition holders,
> the first of which is the primary.
> {{AffinityBackupFilter}} will be set using {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.
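As a rough sketch of how the proposed interface and a rack-aware implementation of it might look: the stub node class and the filter class name below are illustrative assumptions; only the {{isAssignable(Node n, List<Node> assigned)}} signature comes from the proposal above.

```java
import java.util.List;

// Minimal stand-in for a cluster node; real Ignite nodes would expose user
// attributes rather than a hard-coded field.
class ClusterNodeStub {
    final String rack;
    ClusterNodeStub(String rack) { this.rack = rack; }
}

// Sketch of the proposed interface, following the signature from the proposal.
interface AffinityBackupFilterSketch {
    /**
     * @param n        Potential backup to check.
     * @param assigned Current partition holders; the first node is the primary.
     * @return {@code true} if {@code n} may hold a copy of the partition.
     */
    boolean isAssignable(ClusterNodeStub n, List<ClusterNodeStub> assigned);
}

// Rack-aware implementation for the scenario above: reject a candidate if any
// current holder (primary included) already sits in the same rack.
class RackBackupFilter implements AffinityBackupFilterSketch {
    @Override
    public boolean isAssignable(ClusterNodeStub n, List<ClusterNodeStub> assigned) {
        return assigned.stream().noneMatch(h -> h.rack.equals(n.rack));
    }
}
```

Because the whole assigned list is passed in, the filter can reject a candidate that collides with any existing holder, not just the primary, which is exactly what the current {{backupFilter}} cannot do when the number of backups is greater than one.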

This message was sent by Atlassian JIRA
