Thanks for your response Rob.

 

Is step 1 just to reduce downtime for the node? Also, I’m assuming the initial_token of the new node should be set to the same value as the old node’s token, or close to it. E.g., [1] in “Replacing a Dead Node” talks about setting the new node’s initial_token to the dead node’s token minus 1. (I’m not sure why the offset by 1 helps.)
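For concreteness, the arithmetic from [1] would look something like this. The token value below is made up for illustration; the real one would come from running `nodetool ring` against the old node:

```shell
# Hypothetical example: take the old node's token (as reported by
# `nodetool ring`) and subtract 1 to get the replacement's initial_token.
# python3 handles the 127-bit RandomPartitioner token arithmetic.
OLD_TOKEN=56713727820156410577229101238628035242   # example value only
NEW_TOKEN=$(python3 -c "print($OLD_TOKEN - 1)")
echo "$NEW_TOKEN"
```

The resulting value would then go in the new node's cassandra.yaml as initial_token.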

 

If the number of hosts with the new hardware (TBD) is different from the old, then after doing what you suggested, I guess I can follow the regular steps for adding a new node/removing an old node.

 

Thanks,

Arindam

 

[1] http://www.datastax.com/docs/1.0/operations/cluster_management

 

 

From: Robert Coli [mailto:rcoli@eventbrite.com]
Sent: Friday, October 18, 2013 11:50 AM
To: user@cassandra.apache.org
Subject: Re: upgrading Cassandra server hardware best practice?

 

On Fri, Oct 18, 2013 at 11:39 AM, Arindam Barua <abarua@247-inc.com> wrote:

 

We currently have 2 datacenters and a ring of 5 Cassandra servers in each datacenter. We are getting new hardware, and after evaluating it, we plan to upgrade the ring to the new hardware.

 

Is there any recommended procedure for doing so?

 

This is similar to the process for changing the IP address of a node, which I will soon have a canonical blog post for. Here's the reader's digest version.

 

1) Pre-copy sstables from old node to new node. Data files are immutable, so this is safe as houses to do with Cassandra running.

2) nodetool drain on old node.

3) Stop old node.

4) Copy with the equivalent of rsync --delete to re-sync the data directories between old and new. --delete is needed to remove any files on the new node which have been compacted away on the old.

5) Configure new host to have auto_bootstrap:false in conf file.

6) Start new node.

 

Be sure that steps 3-6 do not take > max_hint_window_in_ms, or you will have to repair.
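One rough way to sanity-check that window (max_hint_window_in_ms defaults to one hour, i.e. 3600000 ms, in Cassandra of this era, but verify against your own cassandra.yaml):

```shell
# Time the downtime window (steps 3-6) and compare against the hint window.
START=$(date +%s)
# ... perform steps 3-6 here ...
END=$(date +%s)
ELAPSED_MS=$(( (END - START) * 1000 ))
MAX_HINT_MS=3600000   # assumed default; check max_hint_window_in_ms in cassandra.yaml
[ "$ELAPSED_MS" -le "$MAX_HINT_MS" ] \
  || echo "exceeded hint window; run 'nodetool repair' on the new node"
```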

 

=Rob