incubator-cassandra-user mailing list archives

From Jedd Rashbrooke <>
Subject On 0.6.6 to 0.7.3 migration, DC-aware traffic and minimising data transfer
Date Thu, 10 Mar 2011 12:06:37 GMT

 Assortment of questions relating to an upgrade combined with a
 possible migration between Data Centers (or perhaps a multi-DC
 redesign).  Apologies if some of these have been asked before - I
 have kept half an eye on the list in recent times but haven't seen
 anything covering these particular aspects.

 Upgrade path:
 We're running a 16 node cluster on Amazon EC2, in a single DC
 (US) using 0.6.6.  We didn't do the 0.6.x upgrades mostly because
 things have 'just worked' (and it took a while to get to that stage).
 My question is whether it's considered safer to upgrade via 0.6.12
 to 0.7, or if a direct 0.6.6 -> 0.7 upgrade is safe enough?

 Copying a cluster between AWS DCs:
 We have ~150-250GB per node, with a Replication Factor of 4.
 I accept that 0.6 -> 0.7 is necessarily a stop-the-world upgrade,
 so in an attempt to minimise that outage period I was wondering
 if it's possible to drain & stop the cluster, then copy over only the
 1st, 5th, 9th, and 13th nodes' worth of data (which should be a
 full copy of all our actual data - we are nicely partitioned, despite
 the disparity in GB per node) and have Cassandra re-populate the
 new destination 16 nodes from those four data sets.  If this is
 feasible, is it likely to be more expensive (in terms of time the
 new cluster is unresponsive as it rebuilds) than just copying
 across all 16 sets of data - about 2.7TB?
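 To make the replica-coverage reasoning concrete, here is a minimal
 sketch.  It assumes SimpleStrategy-style placement (each token range
 lives on its primary owner plus the next RF-1 nodes clockwise) and
 that the 1st, 5th, 9th, and 13th nodes are every 4th position on the
 ring; the node numbering is illustrative:

```python
# With 16 nodes and RF=4, check that every 4th node taken together
# holds at least one replica of every token range, assuming
# SimpleStrategy placement (owner + next RF-1 nodes clockwise).
NODES = 16
RF = 4
kept = {0, 4, 8, 12}  # the 1st, 5th, 9th, and 13th nodes

def replicas(owner, rf=RF, n=NODES):
    """Nodes holding a copy of the range primarily owned by `owner`."""
    return {(owner + i) % n for i in range(rf)}

# Every range must land on at least one kept node: any run of 4
# consecutive positions mod 16 contains a multiple of 4.
covered = all(replicas(owner) & kept for owner in range(NODES))
print(covered)  # True
```

 Note this only shows the four data sets cover every range once;
 the destination cluster would still have to stream three further
 replicas of everything to get back to RF=4.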

 Chattiness / gossip traffic requirements on DC-aware:
 I haven't pondered deeply on a 0.7 design yet, so this question is
 even more nebulous.  We're seeing growth (raw) of about 100GB
 per month on our 16 node RF4 cluster - say about 25GB of 'actual'
 data growth.  We don't delete (much) data.  Amazon's calculator
 suggests even 100GB in/out of a data center is modestly priced,
 but I'm cautious in case the replication traffic is particularly chatty
 or excessive.  I'm also wondering how expensive (in terms of
 traffic) a compaction or repair would be across data centers.  Has
 anyone had any experience with an EC2 cluster running 0.7 and
 traversing the pond?  Figures on either traffic-to-cluster-size or
 $-cost-to-cluster-size ratios would be fantastic.
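 For what it's worth, the back-of-envelope arithmetic I'm doing looks
 like this; the per-GB rate here is purely an assumed placeholder, not
 Amazon's actual price - plug in the current figure from the calculator:

```python
# Rough cross-DC transfer cost sketch.  rate_per_gb is an ASSUMED
# placeholder rate in USD/GB, not a quoted AWS price.
rate_per_gb = 0.10

monthly_raw_growth_gb = 100   # raw monthly growth across the RF=4 cluster
one_off_full_copy_gb = 2700   # copying all 16 nodes' data (~2.7TB)

print(f"monthly growth transfer: ${monthly_raw_growth_gb * rate_per_gb:.2f}")
print(f"one-off full copy:       ${one_off_full_copy_gb * rate_per_gb:.2f}")
```

 The open question is how much repair, compaction-triggered streaming,
 and gossip inflate the numerator beyond the raw growth figure.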

