incubator-cassandra-user mailing list archives

From Lanny Ripple <la...@spotright.com>
Subject Re: Decommission an entire DC
Date Wed, 24 Jul 2013 19:26:25 GMT
That one is documented --
http://www.datastax.com/documentation/cassandra/1.2/index.html#cassandra/operations/ops_add_dc_to_cluster_t.html
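For the record, the documented procedure boils down to something like the following sketch (keyspace name `my_ks` and DC names/RFs are placeholders; verify the exact steps for your version against the page above):

```
# After bootstrapping the new DC's nodes (auto_bootstrap: false, and the
# correct dc/rack settings in cassandra-rackdc.properties), add the new DC
# to the keyspace's replication:
cqlsh -e "ALTER KEYSPACE my_ks WITH replication = \
  {'class': 'NetworkTopologyStrategy', 'DC1': 2, 'DC2': 2, 'DC3': 2};"

# Then, on every node in the new DC, stream the existing data from a
# source DC:
nodetool rebuild -- DC1
```

Until the rebuilds complete, reads in the new DC can miss data, so clients should keep using LOCAL_ONE/LOCAL_QUORUM against the existing DCs in the meantime.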


On Wed, Jul 24, 2013 at 3:33 AM, Cyril Scetbon <cyril.scetbon@free.fr> wrote:

> And what if we want to add a new DC? I suppose we should add all the nodes
> and then alter the replication factor of the keyspace, but can anyone
> confirm this and maybe give me some tips?
> FYI, we have 2 DCs with between 10 and 20 nodes each and a 2 TB database
> (local replication factor included)
>
> thanks
> --
> Cyril SCETBON
>
> On Jul 24, 2013, at 12:04 AM, Omar Shibli <omar@eyeviewdigital.com> wrote:
>
> All you need to do is to decrease the replication factor of DC1 to 0, and
> then decommission the nodes one by one,
> I've tried this before and it worked with no issues.
>
> Thanks,
>
> On Tue, Jul 23, 2013 at 10:32 PM, Lanny Ripple <lanny@spotright.com> wrote:
>
>> Hi,
>>
>> We have a multi-DC setup using DC1:2, DC2:2.  We want to get rid of DC1.
>>  We're in the position where we don't need to save any of the data on DC1.
>>  We know we'll lose a (tiny; already checked) bit of data, but our
>> processing is such that we'll recover over time.
>>
>> How do we drop DC1 and just move forward with DC2?  Using nodetool
>> decommission or removetoken looks like we'll eventually end up with a single
>> DC1 node containing the entire DC's data, which would be slow and costly.
>>
>> We've speculated that setting DC1:0 or removing it from the schema would
>> do the trick, but since searching on that idea turned up no hits, I
>> hesitate to just do it.  We can drop DC1's data but have to keep a working
>> ring in DC2.
>>
>>
>
>
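Concretely, the drop-the-RF-then-decommission approach described above maps to commands roughly like these (keyspace name `my_ks` is a placeholder; repeat the ALTER for every keyspace replicated to DC1):

```
# Remove DC1 from the keyspace's replication so it no longer owns any data:
cqlsh -e "ALTER KEYSPACE my_ks WITH replication = \
  {'class': 'NetworkTopologyStrategy', 'DC2': 2};"

# Then, on each DC1 node in turn, remove it from the ring:
nodetool decommission
```

With DC1's replication at zero, decommissioning its nodes streams nothing meaningful, which avoids the problem of the last DC1 node accumulating the entire DC's data. Make sure clients are pointed at DC2 (e.g. a DC-aware load-balancing policy with LOCAL_* consistency) before starting.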
