cassandra-user mailing list archives

From Marcelo Elias Del Valle <>
Subject Re: Cassandra 2.0 unbalanced ring with vnodes after adding new node
Date Thu, 05 Jun 2014 17:37:25 GMT
Actually, I have the same doubt. The same happens to me, but I guess that's
because of my lack of knowledge of Cassandra vnodes, somehow...

I just added 3 nodes to my old 2-node cluster, so now I have a 5-node cluster.

As rows are assigned to nodes based on the hash of the row key spread across
the number of nodes, adding a new node should move data from all the other
nodes to the new ones, right? Assuming, that is, that I have a large enough
number of distinct row keys.
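
For example, to check which replicas a given row key currently maps to,
something like this should show them (the keyspace, table and key here are
just placeholders for my real ones):

    nodetool getendpoints my_keyspace my_table some_row_key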

I noticed that:

   1. Even reading data with read consistency = ALL, I get wrong results
   while the repair is not complete. Should this happen?
   2. I have run nodetool repair on each new node and nodetool cleanup on
   the 2 old nodes. There is some streaming happening, but it's really slow,
   considering my bandwidth and use of SSDs (the commands I mean for checking
   and throttling the streaming are sketched just below).
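
Something like the following is what I have in mind for watching and
throttling the streaming; the 200 Mb/s value is only an example, not a
recommendation:

    # show active streams and their progress on a node
    nodetool netstats

    # raise the streaming throttle at runtime (value in megabits per second)
    nodetool setstreamthroughput 200

    # the same limit can be made permanent in cassandra.yaml:
    # stream_throughput_outbound_megabits_per_sec: 200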

What should I do to make the data stream from the old nodes to the new ones?

And every time I add new nodes to the cluster, will I have to stop the
processes that read data from Cassandra until the move is complete? Isn't
there any other way?

Best regards,

2014-06-04 13:52 GMT-03:00 Владимир Рудев <>:

> Hello to everyone!
> Please, can someone explain where we made a mistake?
> We have a cluster with 4 nodes which uses vnodes (256 per node, the default
> setting); the snitch is the default on every node: SimpleSnitch.
> These four nodes have been in the cluster from the beginning.
> In this cluster we have keyspace with this options:
> Keyspace: K:
>   Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
>   Durable Writes: true
>     Options: [replication_factor:3]
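> For clarity, the relevant cassandra.yaml settings on each of the four nodes
> look roughly like this (the address is a placeholder; only listen_address
> differs between nodes):
>     num_tokens: 256
>     endpoint_snitch: SimpleSnitch
>     listen_address: 10.0.0.1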
> All was normal, and nodetool status K showed that each node owned 75% of the
> key range (effective ownership, which matches RF=3 across 4 nodes). All 4
> nodes are located in the same datacenter and have the same first two bytes
> in their IP addresses (the others differ).
> Then we bought a new server in a different datacenter and added it to the
> cluster with the same settings as the previous four nodes (the only
> difference being listen_address), assuming that the effective ownership of
> each node for this keyspace would become 300/5 = 60%, or close to it. But
> 3-5 minutes after it started, nodetool status K showed this:
> nodetool status K;
> Datacenter: datacenter1
> =======================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address  Load     Tokens  Owns (effective)  Host ID                               Rack
> UN  N1       6,06 GB  256     50.0%             62f295b3-0da6-4854-a53a-f03d6b424b03  rack1
> UN  N2       5,89 GB  256     50.0%             af4e4a23-2610-44dd-9061-09c7a6512a54  rack1
> UN  N3       6,02 GB  256     50.0%             0f0e4e78-6fb2-479f-ad76-477006f76795  rack1
> UN  N4       5,8 GB   256     50.0%             670344c0-9856-48cf-9ec9-1a98f9a89460  rack1
> UN  N5       7,51 GB  256     100.0%            82473d14-9e36-4ae7-86d2-a3e526efb53f  rack1
> N5 is the newly added node.
> Running nodetool repair -pr on N5 doesn't change anything.
> nodetool describering K shows that the new node N5 participates in EVERY
> range. This is not what we want at all.
> It looks like Cassandra added the new node to every range because it is
> located in a different datacenter, but all settings and output say exactly
> the opposite.
> Another interesting point: while the snitch is defined as SimpleSnitch in
> all config files, the output of the command nodetool describecluster is:
> Cluster Information:
>         Name: Some Cluster Name
>         Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
>         Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>         Schema versions:
>                 26b8fa37-e666-31ed-aa3b-85be75f2aa1a: [N1, N2, N3, N4, N5]
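> What is actually configured can be double-checked on every node with
> something like this (the config path is from a package install and may
> differ elsewhere):
>     grep -E '^(endpoint_snitch|num_tokens)' /etc/cassandra/conf/cassandra.yaml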
> We use Cassandra 2.0.6
> Questions we have at this moment:
> 1. How do we rebalance the ring so that all nodes own 60% of the range?
>    1a. Is removing the node from the cluster and adding it again a solution?
>        (A sketch of what we mean by this follows the questions.)
> 2. Where could we have made a mistake when adding the new node?
> 3. If we add a new 6th node to the ring, will it take 50% from N5 or some
> portion from each node?
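>    By "removing and adding again" in 1a we mean roughly this sequence on N5
>    (the paths are the defaults and may differ in our setup; this is only a
>    sketch):
>        nodetool decommission    # stream N5's ranges back to the other nodes and leave the ring
>        sudo service cassandra stop
>        sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*
>        sudo service cassandra start    # the node bootstraps into the ring again from scratch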
> Thanks in advance!
> --
> С уважением,
> Владимир Рудев
> (With regards, Vladimir Rudev)
