I am re-installing on the same machine (the IP stays the same), but need to wipe all the disks for the OS switch.
When you say "copy all the data and config" in step 2 below, is that for backup purposes, or do I need to copy it back to the box manually? When the node rejoins the cluster, shouldn't it get the data automatically via bootstrapping?
>> The error you got says that the schema was not replicated. Check that the node is part of the cluster and check for a split schema using cassandra-cli (the FAQ on the wiki has help for split schema).
nodetool always reports that the node rejoins the cluster fine, and cassandra-cli does not report a split schema.
Are you moving the node to a new machine or re-installing on the same machine?
If it's the former then:
* shut it down cleanly
* copy all the data and config
* update cassandra.yaml with the new IP for listen_address, rpc_address and the seed list (see the example after these steps)
* restart the node
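For reference, a minimal sketch of the relevant cassandra.yaml settings after an IP change; the addresses below are placeholders, and the seeds entry only needs updating if this node is itself in the seed list:

  listen_address: 10.0.0.12      # new IP of this node (gossip/internode traffic)
  rpc_address: 10.0.0.12         # clients connect here
  seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
            - seeds: "10.0.0.10,10.0.0.11"   # update if this node was a seed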
The error you got says that the schema was not replicated. Check that the node is part of the cluster and check for a split schema using cassandra-cli (the FAQ on the wiki has help for split schema).
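For example, a quick way to do both checks (assuming nodetool and cassandra-cli are on the path and the node is listening on the default Thrift port 9160):

  $ nodetool -h <node_ip> ring
  $ cassandra-cli -h <node_ip> -p 9160
  [default@unknown] describe cluster;

A healthy cluster shows exactly one entry under "Schema versions" in the describe cluster output, listing every node's IP; two or more entries means the schema is split.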
On 27/06/2013, at 8:57 AM, Arindam Barua <email@example.com> wrote:
Thanks for your response.
Are there any other general comments on the steps we are taking to decommission the node and rejoin it to the cluster? I'm assuming that if we do specify a token, we should specify exactly the same token when we add the node back.
From: Robert Coli [mailto:firstname.lastname@example.org]
Sent: Tuesday, June 25, 2013 11:15 AM
Subject: Re: Problems with node rejoining cluster
On Mon, Jun 24, 2013 at 11:19 PM, Arindam Barua <email@example.com> wrote:
- We do not specify any tokens in cassandra.yaml, relying on bootstrap to assign the tokens automatically.
As cassandra.yaml comments state, you should never ever do this in a real cluster.
I don't know what is causing your underlying issue, but not specifying tokens is a strong contender.
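As a sketch of what specifying a token looks like, assuming a single-token (non-vnode) setup, the relevant setting in cassandra.yaml is initial_token; the value below is only a placeholder and should come from your own token calculation:

  # Pin this node to an explicit token instead of letting bootstrap pick one.
  # Placeholder value -- compute real tokens for your cluster size and partitioner.
  initial_token: 85070591730234615865843651857942052864

If a node with an assigned token is later decommissioned and re-added, the same value goes back into initial_token, which matches the "same token" assumption in the question above.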