cassandra-user mailing list archives

From Dominik Petrovic <>
Subject Re[4]: Modify keyspace replication strategy and rebalance the nodes
Date Thu, 14 Sep 2017 17:36:04 GMT
I'm using 3 availability zones. During the life of the cluster we lost some nodes and retired others, and we ended up with some of the data written/replicated within a single availability zone. We saw it with nodetool getendpoints.
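For readers following along, a quick way to check where the replicas for a given partition key actually live is nodetool getendpoints, cross-checked against nodetool status. The keyspace, table, and key below are placeholders, not the poster's actual schema:

```shell
# List the nodes holding replicas for partition key '42' in mykeyspace.mytable
# (keyspace/table/key are illustrative; substitute your own)
nodetool getendpoints mykeyspace mytable 42

# Cross-check each returned IP's datacenter/rack assignment
nodetool status mykeyspace
```

With Ec2Snitch, the "Rack" column in nodetool status corresponds to the availability zone, so if every endpoint returned above sits in the same rack, all replicas for that key share one AZ.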

>Thursday, September 14, 2017 9:23 AM -07:00 from Jeff Jirsa <>:
>With one datacenter/region, what did you discover during the outage that you think you'll solve with NetworkTopologyStrategy? It should be equivalent for a single DC.
>Jeff Jirsa
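The two replication maps being compared here can be written out in CQL; the keyspace name and datacenter name below are assumptions for illustration (under Ec2Snitch the datacenter name is derived from the AWS region, e.g. 'us-east' for us-east-1, and must match what nodetool status reports):

```shell
# SimpleStrategy ignores racks and DCs entirely; replicas go to the
# next RF-1 distinct nodes walking the token ring:
cqlsh -e "ALTER KEYSPACE mykeyspace WITH replication = \
  {'class': 'SimpleStrategy', 'replication_factor': 3};"

# NetworkTopologyStrategy keeps 3 replicas in the single DC, but
# places them rack-aware (racks = availability zones under Ec2Snitch):
cqlsh -e "ALTER KEYSPACE mykeyspace WITH replication = \
  {'class': 'NetworkTopologyStrategy', 'us-east': 3};"
```

The total replica count is the same either way, which is the sense in which the two are equivalent for one DC; the rack-aware placement is what differs.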
>On Sep 14, 2017, at 8:47 AM, Dominik Petrovic < > wrote:
>>Thank you for the replies!
>>@jeff my current cluster details are:
>>1 datacenter
>>40 nodes, with vnodes=256
>>What is your advice? It is a production cluster, so I need to be very careful.
>>>Thu, 14 Sep 2017 -2:47:52 -0700 from Jeff Jirsa < >:
>>>The token distribution isn't going to change - the way Cassandra maps replicas
will change. 
>>>How many data centers/regions will you have when you're done? What's your RF now? You definitely need to run repair before you ALTER, but you've got a bit of a race here between the repairs and the ALTER, which you MAY be able to work around if we know more about your setup:
>>>How many nodes
>>>How many regions
>>>How many replicas per region when you're done?
>>>Jeff Jirsa
>>>On Sep 13, 2017, at 2:04 PM, Dominik Petrovic < > wrote:
>>>>Dear community,
>>>>I'd like to receive additional info on how to modify a keyspace replication strategy.
>>>>My Cassandra cluster is on AWS, Cassandra 2.1.15 using vnodes. The cluster's snitch is configured to Ec2Snitch, but the keyspace the developers created uses replication class SimpleStrategy with replication_factor = 3.
>>>>During an outage last week we realized the discrepancy between the snitch configuration and the replication strategy, and we would now like to fix the issue using NetworkTopologyStrategy.
>>>>What are the suggested steps to perform?
>>>>For Cassandra 2.1 I found only this doc:
>>>>which does not mention anything about repairing the cluster.
>>>>For Cassandra 3 I found this other doc:
>>>>which also involves the cluster repair operation.
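The overall sequence those docs describe can be sketched as follows; this is a sketch under assumed keyspace and datacenter names, not an authoritative runbook:

```shell
# 1. Change the replication strategy (run once, via cqlsh).
#    The keyspace name 'mykeyspace' and DC name 'us-east' are placeholders:
cqlsh -e "ALTER KEYSPACE mykeyspace WITH replication = \
  {'class': 'NetworkTopologyStrategy', 'us-east': 3};"

# 2. On EVERY node, repair the keyspace so data is streamed to the
#    nodes now responsible for each range under the new strategy.
#    (On 2.1 a plain repair is a full repair; on 2.2+ add -full.)
nodetool repair mykeyspace

# 3. Optionally, after all repairs finish, remove data each node
#    no longer owns:
nodetool cleanup mykeyspace
```

Until the repairs complete, reads at low consistency levels may miss data, because some of the newly assigned replicas won't have received it yet; that is the race between the ALTER and the repairs mentioned upthread.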
>>>>On a test cluster I tried the steps for Cassandra 2.1, but the token distribution in the ring didn't change, so I'm assuming that wasn't the right thing to do.
>>>>I also performed a nodetool repair -pr, but nothing changed either.
>>>>Any advice?
>>>>Dominik Petrovic
>>Dominik Petrovic

Dominik Petrovic