incubator-cassandra-user mailing list archives

From Michael Theroux <mthero...@yahoo.com>
Subject Re: Moving cluster
Date Thu, 18 Apr 2013 16:10:34 GMT
This should work.  

Another option is to follow a process similar to one we used recently, when we successfully
upgraded 12 instances from large to xlarge in AWS.  I chose not to replace the nodes, as
restoring data from the ring would have taken significant time and put the cluster under
additional load.  I also wanted to eliminate the possibility that any issues on the new
nodes could be blamed on configuration or operating-system differences.  Instead, we used the
following procedure (omitting some details unique to our infrastructure); a rough shell sketch
of the steps appears after the list.

For a node being upgraded:

1) nodetool disablethrift
2) nodetool disablegossip
3) Snapshot the data (nodetool snapshot ...)
4) Backup the snapshot data to EBS (assuming you are on ephemeral)
5) Stop cassandra
6) Move the cassandra.yaml configuration file to cassandra.yaml.bak (so that a future restart
of the instance cannot inadvertently start cassandra)
7) Shutdown the instance
8) Take an AMI of the instance
9) Start a new instance from the AMI with the desired hardware
10) If you assign the new instance a new IP address, make sure any entries in /etc/hosts
and the broadcast_address in cassandra.yaml are updated
11) Attach the volume containing your snapshot backup to the new instance and mount it
12) Restore the snapshot data
13) Restore the cassandra.yaml file
14) Restart cassandra
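
In shell terms, steps 1-6 on the old node and 11-14 on the new one look roughly like the
sketch below.  The snapshot tag, the device name (/dev/xvdf), the mount point, and the
data/config paths are assumptions for a typical package install; adjust them for your setup.

# --- on the node being upgraded (steps 1-6) ---
nodetool disablethrift                  # 1) stop serving client traffic
nodetool disablegossip                  # 2) stop participating in the ring
nodetool snapshot -t pre-resize         # 3) hard-link a snapshot of every keyspace
sudo mount /dev/xvdf /mnt/backup        # 4) assumes an EBS volume is attached and formatted
sudo rsync -a /var/lib/cassandra/data/ /mnt/backup/data/
sudo service cassandra stop             # 5)
sudo mv /etc/cassandra/cassandra.yaml /etc/cassandra/cassandra.yaml.bak   # 6)

# --- on the new instance started from the AMI (steps 11-14) ---
sudo mount /dev/xvdf /mnt/backup        # 11) after attaching the backup volume
sudo rsync -a /mnt/backup/data/ /var/lib/cassandra/data/                  # 12)
sudo mv /etc/cassandra/cassandra.yaml.bak /etc/cassandra/cassandra.yaml   # 13)
sudo service cassandra start            # 14) a restart re-enables gossip and thrift

Note that the snapshot directories are immutable hard links, so they are safe to copy while
cassandra is still up; the live sstables may keep changing until cassandra is stopped in step 5.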

- I recommend practicing this on a test cluster first
- As you replace nodes with new IP addresses, eventually all your seeds will need to be updated.
 This is not urgent until all of your original seed nodes have been replaced.
- Don't forget about NTP!  Make sure it is running on all your new nodes.  To be extra
careful, I actually deleted the NTP drift file and let NTP recalculate it, since it's a new
instance and the hour-plus it took to restore our snapshot data left plenty of time... but
that may have been overkill.
- If you have the opportunity, depending on your situation, increase max_hint_window_in_ms
so hints keep accumulating for the full time a node is offline (a quick check for this and
for NTP follows this list)
- Your details may vary
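
A quick sanity check for the NTP and hint-window points above, run on each replacement node.
The drift-file path and service name are typical for Debian-style installs, so treat them as
assumptions:

ntpq -p                              # verify NTP peers are reachable and syncing
sudo service ntp stop
sudo rm /var/lib/ntp/ntp.drift       # optional: let NTP recompute drift on the new hardware
sudo service ntp start
grep max_hint_window_in_ms /etc/cassandra/cassandra.yaml
# e.g. max_hint_window_in_ms: 10800000 -- the stock yaml ships 3 hours; raise it if a node
# may be down longer than that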

Thanks,
-Mike

On Apr 18, 2013, at 11:07 AM, Alain RODRIGUEZ wrote:

> I would say add your 3 servers to the 3 tokens where you want them, let's say:
> 
> {
>     "0": {
>         "0": 0,
>         "1": 56713727820156410577229101238628035242,
>         "2": 113427455640312821154458202477256070485
>     }
> }
> 
> or these tokens -1 or +1 if these tokens are already in use. Then just decommission the
m1.xlarge nodes. You should be good to go.
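
(For reference, those tokens are i * 2^127 / 3 for i = 0, 1, 2, i.e. three evenly spaced
points on the RandomPartitioner ring.  A quick way to recompute them, assuming bc is available:

for i in 0 1 2; do echo "$i * 2^127 / 3" | bc; done
# 0
# 56713727820156410577229101238628035242
# 113427455640312821154458202477256070485

Once the new nodes have bootstrapped at those tokens, running nodetool decommission on each
old node in turn streams its ranges to the remaining replicas.)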
> 
> 
> 
> 2013/4/18 Kais Ahmed <kais@neteck-fr.com>
> Hi,
> 
> What is the best practice to move from a cluster of 7 nodes (m1.xlarge) to 3 nodes (hi1.4xlarge)?
> 
> Thanks,
> 

