incubator-cassandra-user mailing list archives

From John Pyeatt <>
Subject decommission of one EC2 node in cluster causes other nodes to go DOWN/UP and results in "May not be enough replicas..."
Date Mon, 21 Oct 2013 18:11:32 GMT
We have a 6 node cassandra 1.2.10 cluster running on aws with
NetworkTopologyStrategy, a replication factor of 3 and the EC2Snitch. Each
AWS availability zone has 2 nodes in it.

When we read or write data at consistency level QUORUM while decommissioning a
node, we get "May not be enough replicas present to handle consistency level".

This doesn't make sense. We are only taking one node down, and with an RF of
three a QUORUM read/write needs only two replicas, so even with one node gone
there should still be enough nodes holding the data (2).
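To make the arithmetic above concrete, here is a minimal sketch of the quorum calculation (the function names are illustrative, not Cassandra APIs; the standard formula is quorum = floor(RF / 2) + 1):

```python
def quorum(replication_factor: int) -> int:
    """Number of replicas that must respond for a QUORUM read/write."""
    return replication_factor // 2 + 1

def quorum_possible(replication_factor: int, live_replicas: int) -> bool:
    """Can a QUORUM request succeed with this many live replicas?"""
    return live_replicas >= quorum(replication_factor)

rf = 3
print(quorum(rf))              # 2 replicas required
print(quorum_possible(rf, 2))  # True  -- one node decommissioned, two remain
print(quorum_possible(rf, 0))  # False -- every replica marked DOWN
```

So with RF=3, losing a single replica should leave QUORUM reads and writes unaffected; the error only makes sense if more than one replica is unavailable.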

Looking at the cassandra log on a server that we are *not* decommissioning, we
see this during the decommission of the other node:

 INFO [GossipTasks:1] 2013-10-21 15:18:10,695 (line 803) InetAddress / is now DOWN
 INFO [GossipTasks:1] 2013-10-21 15:18:10,696 (line 803) InetAddress / is now DOWN
 INFO [HANDSHAKE-/] 2013-10-21 15:18:10,862 (line 399) Handshaking version with /
 INFO [GossipTasks:1] 2013-10-21 15:18:11,696 (line 803) InetAddress / is now DOWN
 INFO [GossipTasks:1] 2013-10-21 15:18:11,697 (line 803) InetAddress / is now DOWN
 INFO [GossipTasks:1] 2013-10-21 15:18:11,698 (line 803) InetAddress / is now DOWN

Eventually we see a message like this for each of the nodes:

 INFO [GossipStage:3] 2013-10-21 15:18:19,429 (line 789) InetAddress / is now UP

So the remaining nodes in the cluster do eventually come back to life.

While these nodes are marked down, I can see why we get the "May not be enough
replicas..." message: from gossip's point of view, every replica is down.

My question is: *why does gossip shut down for these nodes that we aren't
decommissioning in the first place*?

John Pyeatt
Singlewire Software, LLC
