ignite-user mailing list archives

From Denis Mekhanikov <dmekhani...@gmail.com>
Subject Re: Cache spreading to new nodes
Date Wed, 14 Aug 2019 15:58:12 GMT
Marco,

Rebalance mode set to NONE means that your cache won’t be rebalanced at all unless you trigger
it manually.
I think it’s better not to set it, because if you don’t trigger the rebalance, only one node will store the cache.

Also, the backup filter specified in the affinity function doesn’t seem correct to me. It’s always true, since your node filter already accepts only the nodes that are in the nodesForOptimization list.

What does the fetchNodes() method do?
The recommended way to implement node filters is to check custom node attributes using an AttributeNodeFilter <https://static.javadoc.io/org.apache.ignite/ignite-core/2.7.5/org/apache/ignite/util/AttributeNodeFilter.html>.
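
For example, something like this (the "cache.group" attribute name and the "optimization" value are arbitrary, just for illustration):

import java.util.Collections;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.util.AttributeNodeFilter;

// On the nodes that should hold the cache, set a custom user attribute:
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setUserAttributes(Collections.singletonMap("cache.group", "optimization"));

// In the cache configuration, filter by that attribute instead of a hard-coded list of node IDs:
graphCfg.setNodeFilter(new AttributeNodeFilter("cache.group", "optimization"));

This way a new node doesn’t get the cache unless you start it with that attribute.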

Partition map exchange is a process that happens after every topology change. Nodes exchange
information about the partition distribution of caches, so you can’t prevent it from happening.
The message that you see is a symptom, not a cause.

Denis


> On 13 Aug 2019, at 09:50, Marco Bernagozzi <marco.bernagozzi@gmail.com> wrote:
> 
> Hi, I did some more digging and discovered that the issue seems to be: 
> 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture: Completed partition exchange 
> 
> Is there any way to disable or limit the partition exchange? 
> 
> Best, 
> Marco 
> 
> On Mon, 12 Aug 2019 at 16:59, Andrei Aleksandrov <aealexsandrov@gmail.com> wrote:
> Hi,
> 
> Could you share the whole reproducer with all configurations and required methods?
> 
> BR,
> Andrei
> 
> On 8/12/2019 4:48 PM, Marco Bernagozzi wrote:
>> I have a set of nodes, and I want to be able to set a cache in specific nodes. It works, but whenever I turn on a new node, the cache is automatically spread to that node, which then causes errors like: 
>> Failed over job to a new node (I guess there was a computation going on in a node that shouldn't have computed it, and that node was shut down in the meantime). 
>> 
>> I don't know if I'm doing something wrong here or I'm missing something. 
>> As I understand it, NodeFilter and Affinity are equivalent in my case (Affinity is a node filter which also creates rules on where the cache can spread from a given node?). With rebalance mode set to NONE, shouldn't the cache be spread on the "nodesForOptimization" nodes, according to either the node filter or the affinityFunction? 
>> 
>> Here's my code: 
>> 
>> List<UUID> nodesForOptimization = fetchNodes(); 
>> 
>> CacheConfiguration<String, Graph> graphCfg = new CacheConfiguration<>(graphCacheName); 
>> graphCfg = graphCfg.setCacheMode(CacheMode.REPLICATED) 
>>             .setBackups(nodesForOptimization.size() - 1) 
>>             .setAtomicityMode(CacheAtomicityMode.ATOMIC) 
>>             .setRebalanceMode(CacheRebalanceMode.NONE) 
>>             .setStoreKeepBinary(true) 
>>             .setCopyOnRead(false) 
>>             .setOnheapCacheEnabled(false) 
>>             .setNodeFilter(u -> nodesForOptimization.contains(u.id())) 
>>             .setAffinity( 
>>                 new RendezvousAffinityFunction( 
>>                     1024, 
>>                     (c1, c2) -> nodesForOptimization.contains(c1.id()) && nodesForOptimization.contains(c2.id()) 
>>                 ) 
>>             ) 
>>             .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC); 
