cassandra-user mailing list archives

From Richard Dawe <rich.d...@messagesystems.com>
Subject Should replica placement change after a topology change?
Date Wed, 09 Sep 2015 14:52:40 GMT
Good afternoon,

I am investigating various topology changes and their effect on replica placement. As far
as I can tell, replica placement does not change after I change the topology and run
nodetool repair + cleanup. I followed the procedure described at http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_switch_snitch.html

Here is my test scenario:

  1.  Cassandra 2.0.15
  2.  6 nodes, initially set up with SimpleSnitch, vnodes enabled, all in one data centre.
  3.  Keyspace set up with SimpleStrategy, replication factor 3.
  4.  Four rows inserted into table in keyspace, integer primary key, text value.
  5.  I shut down the cluster and switch to GossipingPropertyFileSnitch, assigning nodes 1 and 2
to RAC1, 3 and 4 to RAC2, and 5 and 6 to RAC3, all in data centre DC1.
  6.  Restart C* on all nodes.
  7.  Run a nodetool repair plus cleanup.
  8.  Change the keyspace to use replication strategy NetworkTopologyStrategy, RF 3 in DC1.
  9.  Run a nodetool repair plus cleanup.
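
For reference, step 8 amounts to an ALTER KEYSPACE along these lines (the keyspace name
here is a placeholder for the real one):

```sql
-- Hypothetical keyspace name; RF 3 kept in the single data centre DC1.
ALTER KEYSPACE my_keyspace
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
```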

To determine the token range ownership, I used "nodetool ring <keyspace>" and "nodetool
info -T <keyspace>". I saved the output of those commands with the original topology,
after changing the topology, after repairing, after changing the replication strategy, and
then again after repairing. In no case did the token ownership change. It looks like nodetool
ring and nodetool info -T show the owner but not the replicas for a particular range.

I was expecting the replica placement to change. Because the racks were assigned in groups
(rather than alternating), I was expecting the original replica placement with SimpleStrategy
to be non-optimal after switching to NetworkTopologyStrategy. E.g.: if some data was replicated
to nodes 1, 2 and 3, then after the topology change there would be 2 replicas in RAC1, 1 in
RAC2 and none in RAC3. Hence, when the repair and cleanup ran, they would remove one replica
from RAC1 and ensure that there was a replica in RAC3.
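
To make that expectation concrete, here is a minimal Python sketch (not Cassandra's actual
code) of rack-aware replica selection in the spirit of NetworkTopologyStrategy: walk the ring
clockwise from the owning node, prefer nodes in racks that do not yet hold a replica, then
fill any remaining slots in ring order. With the grouped rack assignment above, a range owned
by node 1 would move from nodes {1, 2, 3} under plain ring-order placement to one replica
per rack:

```python
def nts_replicas(ring, racks, start, rf):
    """Sketch of rack-aware replica selection within one data centre.

    ring: node names in token order; racks: node -> rack name;
    start: index of the node owning the range; rf: replication factor.
    """
    replicas, seen_racks, skipped = [], set(), []
    n = len(ring)
    for i in range(n):
        node = ring[(start + i) % n]
        rack = racks[node]
        if rack not in seen_racks:
            # First node seen in a new rack becomes a replica.
            replicas.append(node)
            seen_racks.add(rack)
        else:
            # Remember same-rack nodes in case rf exceeds the rack count.
            skipped.append(node)
        if len(replicas) == rf:
            return replicas
    # All racks represented; fill remaining slots in ring order.
    return replicas + skipped[: rf - len(replicas)]

# Hypothetical 6-node ring matching the scenario: nodes 1+2 in RAC1,
# 3+4 in RAC2, 5+6 in RAC3, grouped (not alternating) on the ring.
ring = ["n1", "n2", "n3", "n4", "n5", "n6"]
racks = {"n1": "RAC1", "n2": "RAC1", "n3": "RAC2",
         "n4": "RAC2", "n5": "RAC3", "n6": "RAC3"}

# SimpleStrategy would take the next 3 nodes in ring order: n1, n2, n3
# (two replicas in RAC1). Rack-aware placement picks one per rack:
print(nts_replicas(ring, racks, 0, 3))  # ['n1', 'n3', 'n5']
```

This is only a model of the behaviour I expected, not a claim about the exact algorithm,
but it shows why the grouped rack assignment should force replicas to move.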

However, when I did a query using cqlsh at consistency QUORUM, I saw that it was hitting two
replicas in the same rack, and a replica in a different rack. This suggests that the replica
placement did not change after the topology change.
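
For context on the numbers involved: QUORUM needs a majority of the RF replicas to respond,
so with RF 3 that is two replicas. A trivial sketch:

```python
def quorum(rf: int) -> int:
    # A quorum is a strict majority of the replicas.
    return rf // 2 + 1

print(quorum(3))  # 2
```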

Am I missing something?

Is there some way I can see which nodes have a replica for a given token range?

Any help/insight appreciated.

Thanks, best regards, Rich

