cassandra-user mailing list archives

From Alex Araujo <cassandra-us...@alex.otherinbox.com>
Subject Re: Ec2 Stress Results
Date Thu, 12 May 2011 02:12:48 GMT
Hey Adrian -
> Why did you choose four big instances rather than more smaller ones?
Mostly to see the impact of additional CPUs on a write-only load.  The
portion of the application we're migrating from MySQL is very
write-intensive.  The other 8-core option was c1.xl with 7GB of RAM.  I will
very likely need more than that once I add reads, as some things can
benefit significantly from the row cache.  I also thought that m2.4xls
would come with 4 disks instead of 2.
> For $8/hr you get four m2.4xl with a total of 8 disks.
> For $8.16/hr you could have twelve m1.xl with a total of 48 disks, 3x
> disk space, a bit less total RAM, and much more CPU.
>
> When an instance fails, you have a 25% loss of capacity with 4 or an
> 8% loss of capacity with 12.
>
> I don't think it makes sense (especially on EC2) to run fewer than 6
> instances; we are mostly starting at 12-15.
> We can also spread the instances over three EC2 availability zones,
> with RF=3 and one copy of the data in each zone.
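The tradeoff quoted above can be checked with a quick sketch.  Figures (node counts, disks per node, hourly cost) are the ones from the thread, not current EC2 pricing:

```python
# Sanity check of the cluster-sizing tradeoff from the thread:
# 4 x m2.4xlarge vs 12 x m1.xlarge at roughly the same hourly cost.

def capacity_loss(nodes):
    """Fraction of total cluster capacity lost when one node fails."""
    return 1.0 / nodes

# Disk counts and prices as stated in the quoted message (2011-era figures).
options = {
    "4 x m2.4xlarge": {"nodes": 4,  "disks_per_node": 2, "cost_per_hr": 8.00},
    "12 x m1.xlarge": {"nodes": 12, "disks_per_node": 4, "cost_per_hr": 8.16},
}

for name, o in options.items():
    total_disks = o["nodes"] * o["disks_per_node"]
    print(f"{name}: {total_disks} disks total, "
          f"${o['cost_per_hr']:.2f}/hr, "
          f"{capacity_loss(o['nodes']):.0%} capacity lost per node failure")
```

This reproduces the numbers in the quote: 8 vs 48 disks, and a 25% vs 8% capacity hit when a single instance dies.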
Agree on all points.  The reason I'm keeping the cluster small for now is
to make it easier to monitor what's going on and find where things break
down.  Eventually it will be an 8+ node cluster spread across AZs as you
mentioned (and likely m2.4xls, as they do seem to provide the most
value per dollar for this type of system).
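For reference, one way to get the "one copy of the data in each zone" layout Adrian describes is Ec2Snitch plus NetworkTopologyStrategy: the snitch maps the EC2 region to the datacenter name and the availability zone to the rack, and NetworkTopologyStrategy then places replicas on distinct racks where possible.  A minimal sketch (the keyspace name is a placeholder, and this assumes `endpoint_snitch: Ec2Snitch` is set in cassandra.yaml on every node):

```sql
-- Hypothetical keyspace; with Ec2Snitch, 'us-east' is the datacenter
-- name for the us-east-1 region, and each AZ is treated as a rack.
-- RF=3 replicas are spread across the three racks (zones).
CREATE KEYSPACE myapp
  WITH replication = {'class': 'NetworkTopologyStrategy', 'us-east': 3};
```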

I'm interested in hearing about your experience(s) and will continue to
share mine.

Alex
