cassandra-user mailing list archives

From aaron morton <aa...@thelastpickle.com>
Subject Re: Replica data distributing between racks
Date Wed, 04 May 2011 10:33:19 GMT
Eric, 
	Jonathan is suggesting the approach Jeremiah was using. 

	Calculate the tokens for the nodes in each DC independently, then add 1 to a token
wherever two nodes would otherwise end up with the same token.

	In your case, with 2 DCs of 2 nodes each:

In DC 1
node 1 = 0
node 2 = 85070591730234615865843651857942052864

In DC 2
node 1 = 1
node 2 = 85070591730234615865843651857942052865

This will evenly distribute the keys in each DC, which is what the NetworkTopologyStrategy
is trying to do. 
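If you want to script the arithmetic, here is a minimal sketch, assuming
RandomPartitioner (token range 0 to 2**127 - 1); the function name and layout are
just for illustration:

    # Per-DC token calculation as described above, assuming
    # RandomPartitioner with a ring of 0 .. 2**127 - 1.
    RING_SIZE = 2 ** 127

    def tokens_for_dc(num_nodes, dc_offset):
        # Spread the nodes evenly around the ring, then bump every
        # token by a small per-DC offset so no two endpoints collide.
        return [i * RING_SIZE // num_nodes + dc_offset for i in range(num_nodes)]

    print(tokens_for_dc(2, 0))  # DC 1: [0, 85070591730234615865843651857942052864]
    print(tokens_for_dc(2, 1))  # DC 2: [1, 85070591730234615865843651857942052865]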

You can make this change using nodetool move. 
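For example, something like the following against the DC 2 nodes (the hostnames
here are placeholders for your own):

    nodetool -h <dc2-node1> move 1
    nodetool -h <dc2-node2> move 85070591730234615865843651857942052865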

Hope that helps. 

-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 4 May 2011, at 08:20, Eric tamme wrote:

> On Tue, May 3, 2011 at 4:08 PM, Jonathan Ellis <jbellis@gmail.com> wrote:
>> On Tue, May 3, 2011 at 2:46 PM, aaron morton <aaron@thelastpickle.com> wrote:
>>> Jonathan,
>>>        I think you are saying each DC should have its own (logical) token ring.
>> 
>> Right. (Only with NTS, although you'd usually end up with a similar
>> effect if you alternate DC locations for nodes in an ONTS cluster.)
>> 
>>>        But currently two endpoints cannot have the same token regardless of the DC they are in.
>> 
>> Also right.
>> 
>>> Or should people just bump the tokens in extra DC's to avoid the collision?
>> 
>> Yes.
>> 
> 
> 
> I am sorry, but I do not understand fully.  I would appreciate it if
> someone could explain with more verbosity for me.
> 
> I do not understand why data insertion is even, but replication is not.
> 
> I do not understand how to solve the problem.  What does "bumping"
> tokens entail?  Is that going to change my insertion distribution?  I
> had no idea you could create different logical keyspaces ... and I am
> not sure what that exactly means... or that I even want to do it.  Is
> there a clear solution to "fixing" the problem I laid out, and getting
> replication data evenly distributed between racks in each DC?
> 
> Sorry again for needing more verbosity - I am learning as I go with
> this stuff.  I appreciate everyone's help.
> 
> -Eric

