cassandra-user mailing list archives

From Josep Blanquer <blanq...@rightscale.com>
Subject Re: Backup/Restore: Coordinating Cassandra Nodetool Snapshots with Amazon EBS Snapshots?
Date Thu, 23 Jun 2011 15:53:59 GMT
On Thu, Jun 23, 2011 at 8:02 AM, William Oberman
<oberman@civicscience.com> wrote:

> I've been doing EBS snapshots for mysql for some time now, and was using a
> similar pattern as Josep (XFS with freeze, snap, unfreeze), with the extra
> complication that I was actually using 8 EBS's in RAID-0 (and the extra
> extra complication that I had to lock the MyISAM tables... glad to be moving
> away from that).  For cassandra I switched to ephemeral disks, as per
> recommendations from this forum.
>
Yes, if you want to consistently snap MySQL you need to get it into a
consistent state, so you need to do the whole FLUSH TABLES WITH READ LOCK
yadda yadda on top of the rest. Otherwise you might snapshot something that
is not correct/consistent... and it's a bit trickier when snapshotting
slaves, since you also need to record where they are in the replication
stream, etc.
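
For what it's worth, the whole sequence boils down to something like this
rough sketch (mount point, volume id, credentials and the ec2 tool
invocation are just placeholders, not our actual tooling):

# Rough sketch of a consistent MySQL snapshot on an XFS-on-EBS volume.
# Mount point, volume id and credentials are placeholders.
import subprocess

MYSQL_MOUNT = "/mnt/mysql"      # XFS filesystem holding the MySQL datadir
EBS_VOLUME = "vol-12345678"     # EBS volume backing that filesystem

# Hold FLUSH TABLES WITH READ LOCK for the whole freeze/snapshot window.
# The lock goes away when this mysql client exits, so keep it running.
mysql = subprocess.Popen(
    ["mysql", "--batch", "--skip-column-names", "--unbuffered"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)
mysql.stdin.write(b"FLUSH TABLES WITH READ LOCK;\nSELECT 'locked';\n")
mysql.stdin.flush()
mysql.stdout.readline()  # blocks until the lock is actually held

try:
    # Quiesce the filesystem so the blocks on disk are crash-consistent.
    subprocess.check_call(["xfs_freeze", "-f", MYSQL_MOUNT])
    try:
        # The snapshot call returns almost immediately; the actual
        # delta/compress/upload happens behind the scenes afterwards.
        # (Use whatever EC2 tooling you have; this is the old API tools.)
        subprocess.check_call(["ec2-create-snapshot", EBS_VOLUME,
                               "-d", "mysql-consistent-snapshot"])
    finally:
        subprocess.check_call(["xfs_freeze", "-u", MYSQL_MOUNT])
finally:
    mysql.stdin.write(b"UNLOCK TABLES;\n")
    mysql.stdin.close()
    mysql.wait()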



> One note on EBS snapshots though: the last time I checked (which was some
> time ago) I noticed degraded IO performance on the box during the
> snapshotting process even though the take snapshot command returns almost
> immediately.  My theory back then was that amazon does the
> delta/compress/store "outside" of the VM, but it obviously has an effect on
> resources on the box the VM runs on.  I was doing this on a mysql slave that
> no one talked to, so I didn't care/bother looking into it further.
>
>
Yes, that is correct. The underlying copy-on-write-and-ship-to-EBS/S3 does
have some performance impact on the running box. For the most part it has
never presented a problem for us or many of our customers, although you're
right, it's something you want to know about and keep in mind when designing
your system (for example, snapshot slaves much more often than masters, do
masters when traffic is low, stagger cassandra snaps... yadda yadda).
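
By "stagger cassandra snaps" I mean something as simple as this rough
sketch (hosts and delay are made-up placeholders): snapshot one node at a
time so the overhead never hits the whole ring at once.

# Rough sketch: take Cassandra snapshots one node at a time, with a gap
# in between, so the snapshot/copy-off overhead is staggered across the
# ring instead of hitting every node simultaneously. Hosts and delay are
# placeholders; exact nodetool arguments vary a bit by Cassandra version.
import subprocess
import time

NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # your Cassandra nodes
STAGGER_SECONDS = 300                          # gap between nodes

for node in NODES:
    # nodetool snapshot just hard-links the SSTables locally; follow it
    # with your EBS snapshot / copy-off step for that node before moving
    # on to the next one.
    subprocess.check_call(["nodetool", "-h", node, "snapshot"])
    time.sleep(STAGGER_SECONDS)
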
If you think about it, this effect is not that different from using LVM
snaps on the ephemeral disks and then moving the data from the snap to
another disk or to remote storage... moving those blocks would have an
impact on the original LVM volume, since the copy is reading the same
physical (ephemeral) disk(s) underneath (following the list of clean and
dirty blocks).
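
For reference, the LVM variant I'm comparing it to would look roughly like
this (VG/LV names, snapshot size, mount point and destination are
placeholders; the nouuid option is only needed if the filesystem is XFS):

# Rough sketch of the LVM-on-ephemeral equivalent: snapshot the logical
# volume, mount it read-only, and copy the frozen view off the box.
# VG/LV names, sizes and destination are placeholders.
import subprocess

VG, LV = "data_vg", "cassandra_lv"
SNAP = "cassandra_snap"
MOUNT_POINT = "/mnt/cassandra_snap"
DEST = "backup-host:/backups/cassandra/"

# Copy-on-write snapshot of the live volume; --size is how much change
# it can absorb while it exists.
subprocess.check_call(["lvcreate", "--snapshot", "--size", "10G",
                       "--name", SNAP, "%s/%s" % (VG, LV)])
try:
    subprocess.check_call(["mkdir", "-p", MOUNT_POINT])
    # nouuid is needed when the filesystem is XFS (duplicate UUID).
    subprocess.check_call(["mount", "-o", "ro,nouuid",
                           "/dev/%s/%s" % (VG, SNAP), MOUNT_POINT])
    try:
        # This copy is exactly what competes with the live volume, since
        # it reads the same physical (ephemeral) disks underneath.
        subprocess.check_call(["rsync", "-a", MOUNT_POINT + "/", DEST])
    finally:
        subprocess.check_call(["umount", MOUNT_POINT])
finally:
    subprocess.check_call(["lvremove", "-f", "/dev/%s/%s" % (VG, SNAP)])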

One case where I could see the slightly reduced IO performance being
problematic is if your DB/storage is already at the edge of its I/O
capacity... but in that case, the small overhead of a snapshot is probably
the least of your problems :) EBS slowness or malfunction can also impact
the instance, obviously, although that is not only related to snapshots,
since it can affect the actual volume regardless.

 Josep M.
