cassandra-user mailing list archives

From Robert Coli <>
Subject Re: Nodes not added to existing cluster
Date Wed, 25 Sep 2013 19:56:11 GMT
On Wed, Sep 25, 2013 at 12:41 PM, Skye Book <> wrote:

> I have a three node cluster using the EC2 Multi-Region Snitch currently
> operating only in US-EAST.  On having a node go down this morning, I
> started a new node with an identical configuration, except for the seed
> list, the listen address and the rpc address.  The new node comes up and
> creates its own cluster rather than joining the pre-existing ring.  I've
> tried creating a node both *before* and *after* using `nodetool remove`
> for the bad node, each time with the same result.

What version of Cassandra?

This particular confusing behavior is fixed upstream, in a version you
should not deploy to production yet. Take some solace, however, that you
may be the last Cassandra administrator to die for a broken code path!

> Does anyone have any suggestions for where to look that might put me on the
> right track?

It must be that your seed list is wrong in some way, or your node state is
wrong. If you're trying to bootstrap a node, note that you can't bootstrap
a node when it is in its own seed list.
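A quick way to check for that pitfall is to compare the node's own address
against its configured seeds. This is only a sketch; the IP addresses are
placeholders, and in practice you would take `listen_address` and the seed
list from the node's cassandra.yaml:

```shell
# Sketch: a node cannot bootstrap if its own address appears in its own
# seed list. The addresses here are hypothetical examples.
in_seed_list() {
  own_ip="$1"
  seeds="$2"          # space-separated seed addresses
  for s in $seeds; do
    if [ "$s" = "$own_ip" ]; then
      return 0        # found itself among the seeds
    fi
  done
  return 1
}

# e.g., on the new node:
if in_seed_list "10.0.1.12" "10.0.1.10 10.0.1.11"; then
  echo "remove this node from its own seed list before bootstrapping"
fi
```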

If you have installed Cassandra via the Debian package, there is a
possibility that your node started before you explicitly started it. If so,
it might have invalid node state.

Have you tried wiping the data directory and trying again?
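If you do wipe it, clear all three state directories, not just the data
files. A minimal sketch, assuming the Debian-package default layout under
/var/lib/cassandra (stop the service first, then start it again afterwards):

```shell
# Sketch: clear a node's stale on-disk state (SSTables, commit log,
# saved caches) so it can rejoin the ring cleanly. The argument is the
# Cassandra data root; /var/lib/cassandra is the Debian default.
wipe_node_state() {
  base="$1"
  for d in data commitlog saved_caches; do
    rm -rf "${base:?}/$d"/*   # :? guards against an empty path
  done
}

# e.g., after `sudo service cassandra stop`:
#   wipe_node_state /var/lib/cassandra
```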

What is your seed list? Are you sure the new node can reach the seeds on
the network layer?
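One way to test that from the new node is to probe each seed on the gossip
port. A sketch using `nc`; 7000 is Cassandra's default storage_port (7001
with internode encryption), and the seed addresses are placeholders:

```shell
# Sketch: report whether each seed is reachable on the gossip port.
check_seeds() {
  seeds="$1"          # space-separated seed addresses
  port="${2:-7000}"   # default storage_port
  for seed in $seeds; do
    if nc -z -w 3 "$seed" "$port" 2>/dev/null; then
      echo "ok:   $seed:$port"
    else
      echo "FAIL: $seed:$port unreachable"
    fi
  done
}

# e.g.: check_seeds "10.0.1.10 10.0.1.11" 7000
```

With the EC2 multi-region snitch in particular, also make sure the relevant
security groups allow that port between the nodes.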

