ignite-issues mailing list archives

From "Semen Boikov (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution
Date Fri, 01 Jul 2016 08:34:10 GMT

    [ https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15358657#comment-15358657 ]

Semen Boikov commented on IGNITE-2310:
--------------------------------------

Reviewed, my comments:
- please add test for ScanQuery
- please add test for SqlQuery
- please add test for both affinityRun methods, for both affinityCall methods
- please add some tests to check that partitions are released after job execution in the following scenarios: the job completes normally, the job throws an exception, the job throws an Error, the node that sent the job request fails while the job is running (plus the same case where the job implements ComputeJobMasterLeaveAware), job unmarshalling fails
- Ignite supports CollisionSpi, please add some tests to check that affinityCall is not broken when CollisionSpi is used
- I looked at local SQL query execution: indexing uses a special 'backup filter' to filter out backup keys, and currently this filter uses the current affinity version, which is probably what causes the test failure. To fix it, add the affinity topology version that was used to map the job to the request, and pass this version to the query backup filter (you can use a ThreadLocal context for this)
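A minimal sketch of the ThreadLocal-context idea mentioned above. The class and method names here are hypothetical illustrations, not actual Ignite API: the affinity topology version used to map the job is stashed in a per-thread context before the local query runs, and the backup filter would read it from there instead of using the current affinity version.

```java
// Hypothetical sketch (not Ignite API): per-thread context carrying the
// affinity topology version that was used to map the job.
public class QueryAffinityContext {
    // Holds the topology version the job was mapped on (null = use current).
    private static final ThreadLocal<Long> MAPPED_TOP_VER = new ThreadLocal<>();

    public static void enter(long topVer) { MAPPED_TOP_VER.set(topVer); }
    public static Long mappedVersion()    { return MAPPED_TOP_VER.get(); }
    public static void leave()            { MAPPED_TOP_VER.remove(); }

    public static void main(String[] args) {
        enter(42L); // version taken from the job request
        try {
            // A backup filter would call mappedVersion() here.
            System.out.println("mapped version: " + mappedVersion());
        } finally {
            leave(); // always clear the context after query execution
        }
        System.out.println("after leave: " + mappedVersion());
    }
}
```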

> Lock cache partition for affinityRun/affinityCall execution
> -----------------------------------------------------------
>
>                 Key: IGNITE-2310
>                 URL: https://issues.apache.org/jira/browse/IGNITE-2310
>             Project: Ignite
>          Issue Type: New Feature
>          Components: cache
>            Reporter: Valentin Kulichenko
>            Assignee: Taras Ledkov
>            Priority: Critical
>              Labels: community
>             Fix For: 1.7
>
>
> The partition of a key passed to {{affinityRun}} must be located on the affinity node when a compute job is sent to that node, and the partition has to be locked in the cache while the compute job is being executed. This allows queries (Scan or local SQL) to be executed safely over data that is located locally in the locked partition.
> In addition, the Ignite Compute API has to be extended with {{affinityCall}} and {{affinityRun}} methods that accept a list of caches whose partitions have to be locked while a compute task is being executed.
> Test cases to validate the functionality:
> 1) Local SQL query over data located in a concrete partition in multiple caches:
> - create an Organisation cache and a Persons cache;
> - collocate Persons by 'organisationID';
> - send {{affinityRun}} using 'organisationID' as the affinity key, passing the Organisation and Persons cache names to the method to be sure that the partition will be locked in both caches;
> - execute the local SQL query 'SELECT * FROM Persons as p, Organisation as o WHERE p.orgId=o.id' on a changing topology. The result set must be complete, and the partition over which the query is executed must not be moved to another node. Due to affinity collocation, the partition number will be the same for all Persons that belong to a particular 'organisationID'.
> 2) Scan query over a particular partition that is locked while {{affinityCall}} is executed.
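A simplified illustration of why the collocation in test case 1 works. The toy partitioner below is an assumption, not Ignite's actual affinity function (Ignite's RendezvousAffinityFunction is more involved), but like the real one it is a pure function of the affinity key, so an organisation and all of its persons land in the same partition:

```java
// Simplified sketch (not Ignite's affinity function): the partition number
// is derived from the affinity key alone, so collocating Persons by
// 'organisationID' puts each organisation and its persons together.
public class CollocationSketch {
    static final int PARTS = 1024;

    // Toy partitioner for illustration only.
    static int partition(Object affKey) {
        return Math.abs(affKey.hashCode() % PARTS);
    }

    // A Person's partition ignores personId: the affinity key is orgId.
    static int personPartition(int personId, int orgId) {
        return partition(orgId);
    }

    public static void main(String[] args) {
        int orgId = 1001;
        int orgPart = partition(orgId);              // Organisation entry
        int p1 = personPartition(7, orgId);          // one Person of the org
        int p2 = personPartition(8, orgId);          // another Person, same org
        System.out.println("same partition: " + (orgPart == p1 && p1 == p2));
    }
}
```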
> UPD (YZ May, 31)
> # If a closure arrives at a node but the partition is not there, it should be silently failed over to the current owner.
> # I don't think the user should provide a list of caches. How about reserving only one partition, but evicting partitions only after all partitions in all caches (with the same affinity function) on this node are locked for eviction? [~sboikov], can you please comment? It seems this should work faster for closures and will hardly affect rebalancing.
> # I would add a method {{affinityCall(int partId, String cacheName, IgniteCallable)}} and the same for Runnable. This will allow me to avoid dealing with an affinity key in case I already know the partition.
> UPD (SB, June, 01)
> Yakov, I think it is possible to implement this 'locking for evictions' approach, but I personally prefer partition reservation:
> - the reservation approach is already implemented and works fine in SQL queries
> - a partition reservation is just a CAS operation; even if we need to do ~10 reservations, I think this will be negligible compared to the job execution time
> - caches are currently rebalanced completely independently, and changing this would be a complicated refactoring
> - I see some difficulties in determining that caches have the same affinity. If the user uses a custom function, should they implement 'equals'? For standard affinity functions the user can set a backup filter; what to do in this case? Should the user implement 'equals' for the filter? Even if the affinity functions are the same, the cache configuration can have a node filter, so the affinity mapping will be different.
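A sketch of why a partition reservation is "just a CAS operation", as argued above. This is a simplified illustration of the idea, not Ignite's actual implementation: a reservation counter is bumped with compareAndSet, and eviction is only allowed once the counter is CAS-ed from zero to a special "evicted" marker.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of CAS-based partition reservation (illustration only,
// not Ignite's GridDhtLocalPartition): reserve() and tryEvict() race safely
// on a single atomic counter.
public class ReservationSketch {
    static final int EVICTED = -1;

    final AtomicInteger reservations = new AtomicInteger();

    boolean reserve() {
        for (;;) {
            int r = reservations.get();
            if (r == EVICTED)
                return false;                            // partition already gone
            if (reservations.compareAndSet(r, r + 1))
                return true;                             // reserved
        }
    }

    void release() { reservations.decrementAndGet(); }

    boolean tryEvict() {
        return reservations.compareAndSet(0, EVICTED);   // only when unreserved
    }

    public static void main(String[] args) {
        ReservationSketch part = new ReservationSketch();
        System.out.println("reserved: " + part.reserve());
        System.out.println("evict while reserved: " + part.tryEvict());
        part.release();
        System.out.println("evict after release: " + part.tryEvict());
        System.out.println("reserve after evict: " + part.reserve());
    }
}
```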



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
