cassandra-user mailing list archives

From Roshan <>
Subject Decommissioned nodes start to appear from one node (1.0.11)
Date Thu, 16 May 2013 06:42:06 GMT

A couple of months back, I removed 2 nodes from the live production cluster
and introduced another 2 (so the cluster is now 4 nodes). At that time,
nodetool ring showed the correct 4 nodes.

Today, a couple of months later, I restarted one node in the cluster after
increasing its number of cores. Suddenly, from this node, nodetool ring shows
the already decommissioned nodes. Please see the nodetool ring output below.

Address   DC           Rack   Status  State   Load      Owns     Token
                                                                  127605887595351923798765477786913079296
          datacenter1  rack1  Up      Normal  1.52 GB   25.00%   0
          datacenter1  rack1  Down    Normal  ?         16.67%   28356863910078203714492389662765613056
          datacenter1  rack1  Up      Normal  1.53 GB   8.33%    42535295865117307932921825928971026432
          datacenter1  rack1  Down    Normal  ?         15.00%   68056473384187696470568107782069813248
          datacenter1  rack1  Up      Normal  1.62 GB   10.00%   85070591730234615865843651857942052864
          datacenter1  rack1  Up      Normal  1.69 GB   25.00%   127605887595351923798765477786913079296

You can see two nodes showing as Down; those are the nodes that were
decommissioned earlier.

I am using Hector as the client to connect to Cassandra.
I have 2 keyspaces, both with replication_factor 4. The consistency
level is the default. Due to this situation, I got the error below as well.

me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be
enough replicas present to handle consistency level.
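
For reference, the client-side setup is roughly like the sketch below. The
cluster name, contact host, and keyspace name are placeholders rather than the
real production values; the point is only that the keyspace is created without
overriding Hector's default consistency level policy.

import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

public class ClientSetup {
    public static void main(String[] args) {
        // Placeholder cluster name and contact host -- not the real production values.
        Cluster cluster = HFactory.getOrCreateCluster("TestCluster",
                new CassandraHostConfigurator("localhost:9160"));

        // The keyspace is created without an explicit ConsistencyLevelPolicy,
        // so Hector applies its default policy (QUORUM for reads and writes).
        Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster);

        // Mutators and queries are then built on top of this keyspace object.
    }
}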

Why is all this misleading behavior happening?

