incubator-cassandra-user mailing list archives

From Chong Zhang <chongz.zh...@gmail.com>
Subject Re: tokens and RF for multiple phases of deployment
Date Fri, 01 Jun 2012 18:59:32 GMT
I followed the doc to add the new node. After running nodetool repair, the
'Load' on the new node in DC2 increased to 250 MB. But the 'Owns' column
still shows 50%, 50%, 0%, and I guess that's expected because the new token
value is 1?
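(For what it's worth, the 'Owns' column is computed purely from token ranges, not from data volume, so a node at token 1 owns only the range (0, 1], a vanishingly small slice of RandomPartitioner's 2^127 space. A rough sketch of that calculation, illustrative rather than Cassandra's actual code:

```python
# Approximate how nodetool ring derives the 'Owns' column under
# RandomPartitioner: each node owns the range from the previous token
# (exclusive) up to its own token (inclusive), wrapping around the ring.
RING_SIZE = 2 ** 127  # RandomPartitioner token space

def ownership(tokens):
    """Return {token: fraction_of_ring_owned} for a list of tokens."""
    tokens = sorted(tokens)
    owns = {}
    for i, tok in enumerate(tokens):
        prev = tokens[i - 1]  # index -1 wraps to the last token
        owns[tok] = (tok - prev) % RING_SIZE / RING_SIZE
    return owns

ring = [0, 1, 85070591730234615865843651857942052864]
for tok, frac in ownership(ring).items():
    print(f"{tok}: {frac:.2%}")
```

With the ring from the output below, this yields 50% / ~0% / 50%, matching what nodetool reports even after repair has streamed data onto the node.)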

Thanks,
Chong
On Thu, May 31, 2012 at 9:52 PM, aaron morton <aaron@thelastpickle.com> wrote:

> The ring (2 in DC1, 1 in DC2) looks OK, but the load on the new node in
> DC2 is almost 0%.
>
> yeah, that's the way it will look.
>
> But all the other rows are not in the new node. Do I need to copy the data
> files from a node in DC1 to the new node?
>
> How did you add the node? (see
> http://www.datastax.com/docs/1.0/operations/cluster_management#adding-nodes-to-a-cluster
> )
>
> if in doubt run nodetool repair on the new node.
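(A minimal command sketch of that procedure, following the Cassandra 1.0 docs linked above; the host address is taken from the ring output later in the thread, and the token is just this thread's example value:

```shell
# Before starting the new node, set its token in cassandra.yaml:
#   initial_token: 1
#   auto_bootstrap: true
# Then start Cassandra on the new node and confirm it joined the ring:
nodetool -h 10.10.10.3 ring
# Finally, stream over any data the bootstrap may have missed:
nodetool -h 10.10.10.3 repair
```

Repair is the step that actually pulls existing rows from the other replicas, which is why load jumps only after it runs.)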
>
> Cheers
>
>
> -----------------
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 1/06/2012, at 3:46 AM, Chong Zhang wrote:
>
> Thanks Aaron.
>
> I might use LOCAL_QUORUM to avoid the waiting on the ack from DC2.
>
> Another question: after I set up a new node with token +1 in a new DC and
> updated a CF with RF {DC1:2, DC2:1}, when I update a column on one node in
> DC1, it's also updated on the new node in DC2. But all the other rows are
> not on the new node. Do I need to copy the data files from a node in DC1 to
> the new node?
>
> The ring (2 in DC1, 1 in DC2) looks OK, but the load on the new node in
> DC2 is almost 0%.
>
> Address       DC    Rack  Status  State   Load       Owns     Token
>                                                               85070591730234615865843651857942052864
> 10.10.10.1    DC1   RAC1  Up      Normal  313.99 MB  50.00%   0
> 10.10.10.3    DC2   RAC1  Up      Normal  7.07 MB    0.00%    1
> 10.10.10.2    DC1   RAC1  Up      Normal  288.91 MB  50.00%   85070591730234615865843651857942052864
>
> Thanks,
> Chong
>
> On Thu, May 31, 2012 at 5:48 AM, aaron morton <aaron@thelastpickle.com> wrote:
>
>>
>> Could you provide some guide on how to assign the tokens in this growing
>> deployment phases?
>>
>>
>> background
>> http://www.datastax.com/docs/1.0/install/cluster_init#calculating-tokens-for-a-multi-data-center-cluster
>>
>> Start with tokens for a 4 node cluster. Add the next 4 between each of
>> the existing ranges. Add 8 in the new DC with the same tokens as the
>> first DC, offset by +1.
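(That layout can be sketched numerically. An illustrative calculation, assuming RandomPartitioner's 0..2^127 token space as in the docs linked above:

```python
# Illustrative token plan for the phases described above, assuming
# RandomPartitioner (token space 0 .. 2**127).
RING_SIZE = 2 ** 127

def evenly_spaced(n):
    """Initial tokens for an n-node ring, evenly dividing the space."""
    return [i * RING_SIZE // n for i in range(n)]

phase1 = evenly_spaced(4)                # initial 4-node DC1
phase2 = evenly_spaced(8)                # 8 nodes; new tokens bisect old ranges
new_nodes = [t for t in phase2 if t not in phase1]  # the 4 added nodes
dc2 = [t + 1 for t in phase2]            # second DC: same tokens offset by +1

print(phase1)      # includes 85070591730234615865843651857942052864
print(new_nodes)   # the tokens to assign when growing 4 -> 8
print(dc2[:2])     # DC2 starts at token 1, matching the ring above
```

Because the 4-node tokens are all even multiples of 2^127/8, they survive unchanged into the 8-node plan, so growing the first DC only requires bootstrapping the 4 new tokens, and DC2's +1 offset keeps its nodes from colliding with DC1's.)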
>>
>> Also if we use the same RF (3) in both DCs, and use EACH_QUORUM for writes
>> and LOCAL_QUORUM for reads, can the read also reach the 2nd cluster?
>>
>> No. It will fail if there are not enough nodes available in the first DC.
>>
>> We'd like to keep both write and read on the same cluster.
>>
>> Writes go to all replicas. Using EACH_QUORUM means the client in the
>> first DC will be waiting for the quorum from the second DC to ack the
>> write.
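(The ack counts behind that behaviour can be sketched with standard quorum arithmetic; this is an illustration, not driver code:

```python
# Replica acks required per consistency level, for RF 3 in each of two
# data centres. Standard quorum arithmetic: quorum = rf // 2 + 1.
rf = {"DC1": 3, "DC2": 3}

def quorum(n):
    return n // 2 + 1

# LOCAL_QUORUM: acks counted only from the coordinator's own DC.
local_quorum = quorum(rf["DC1"])
# EACH_QUORUM: a quorum must ack in every DC before success.
each_quorum = {dc: quorum(n) for dc, n in rf.items()}

print(local_quorum)   # 2
print(each_quorum)    # {'DC1': 2, 'DC2': 2}
```

So a LOCAL_QUORUM read in DC1 is satisfied by 2 of DC1's 3 replicas and never blocks on DC2, while an EACH_QUORUM write blocks until 2 replicas in each DC have acked, including the cross-DC round trip.)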
>>
>>
>> Cheers
>> -----------------
>> Aaron Morton
>> Freelance Developer
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>> On 31/05/2012, at 3:20 AM, Chong Zhang wrote:
>>
>> Hi all,
>>
>> We are planning to deploy a small cluster with 4 nodes in one DC first,
>> and will expand that to 8 nodes, then add another DC with 8 nodes for
>> failover (not active-active), so all the traffic will go to the 1st
>> cluster, and switch to the 2nd cluster if the whole 1st cluster is down
>> or under maintenance.
>>
>> Could you provide some guidance on how to assign the tokens in these
>> growing deployment phases? I looked at some docs but am not very clear on
>> how to assign tokens for the failover case.
>> Also if we use the same RF (3) in both DCs, and use EACH_QUORUM for writes
>> and LOCAL_QUORUM for reads, can the read also reach the 2nd cluster?
>> We'd like to keep both writes and reads on the same cluster.
>>
>> Thanks in advance,
>> Chong
>>
>>
>>
>
>
