cassandra-user mailing list archives

From Dominic Williams <>
Subject Re: 99.999% uptime - Operations Best Practices?
Date Thu, 23 Jun 2011 12:59:43 GMT

Cassandra is a good system, but it has not yet reached version 1.0, and neither
has HBase and the rest. It is cutting-edge technology, so in practice you are
unlikely to achieve five nines immediately - even though in theory, with perfect
planning, perfect administration and so on, it should be achievable.
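To put the figure in perspective, five nines leaves a very small downtime
budget. A quick back-of-the-envelope calculation (plain Python, nothing
Cassandra-specific):

```python
# Downtime budget implied by an uptime target, in minutes per year.
def downtime_minutes_per_year(uptime_fraction):
    minutes_per_year = 365.25 * 24 * 60
    return (1.0 - uptime_fraction) * minutes_per_year

# Five nines: 99.999% uptime.
budget = downtime_minutes_per_year(0.99999)
print(round(budget, 2))  # roughly 5.26 minutes of downtime per year
```

That is about five minutes per year - less than a single unplanned JVM
restart on a struggling node might cost you.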

The reasons you might choose Cassandra are:
1. A new, more flexible data model that may increase developer productivity and
lead to faster release cycles
2. Superior capability for *writing* large volumes of data, which is incredibly
useful in many applications
3. Horizontal scalability, where you can add nodes rather than buying bigger
machines
4. Data redundancy, which gives you a kind of live backup, a bit like RAID - we
use a replication factor of 3, for example
5. Because data is replicated across the cluster, the ability to perform
rolling restarts to administer and upgrade your nodes while the cluster
continues to run (yes, this is the feature that in theory allows for
continual operation, but in practice, until we reach 1.0, I don't think five
nines of uptime is always possible in every scenario, because deficiencies
may present themselves unexpectedly)
6. The benefit of building your new product on a platform designed to solve
many modern computing challenges, which gives you a better upgrade path - in
future, when you grow, you won't have to change over from SQL to NoSQL
because you're already on it!
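As a sketch of what point 5 involves in practice, here is the kind of command
sequence a rolling restart follows, one node at a time. The host names and the
service-restart command are assumptions for illustration; `nodetool drain` and
`nodetool ring` are the standard tools shipped with Cassandra. This just builds
the plan as a list of strings rather than executing anything:

```python
# Sketch: ordered commands for a rolling restart, one node at a time.
# Host names and the ssh/service commands are hypothetical; adapt them
# to your environment before actually running anything.
def rolling_restart_plan(nodes):
    plan = []
    for node in nodes:
        # Flush memtables and stop the node accepting new requests.
        plan.append("nodetool -h %s drain" % node)
        # Restart the Cassandra process on that node (assumed command).
        plan.append("ssh %s 'service cassandra restart'" % node)
        # Check the ring so you can confirm the node is back Up
        # before draining the next one.
        plan.append("nodetool -h %s ring" % node)
    return plan

for cmd in rolling_restart_plan(["cass1", "cass2", "cass3"]):
    print(cmd)
```

The important property is the ordering: with a replication factor of 3, only
one replica is ever down at a time, so reads and writes keep succeeding.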

These are pretty compelling arguments, but you have to be realistic about
where Cassandra is right now. For what it's worth, though, you might also
consider how easy it is to screw up databases running on commercial
production software when they are handling very large amounts of data (just
let the volume holding the commit log run short of disk space, for example).
Setting up a Cassandra cluster is the simplest way of handling big data I've
seen, and that reduction in complexity will also contribute to uptime.
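That commit-log scenario is also easy to guard against. A minimal cron-able
check using only the standard library; the commit-log path and the 5 GiB
threshold here are assumptions for illustration (the actual path is whatever
you configured in cassandra.yaml):

```python
import os

def free_bytes(path):
    """Free bytes on the filesystem containing `path`."""
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize

# Assumed defaults for illustration; adjust to your installation.
COMMITLOG_DIR = "/var/lib/cassandra/commitlog"
THRESHOLD = 5 * 1024 ** 3  # alert when under 5 GiB of headroom

if __name__ == "__main__":
    if os.path.isdir(COMMITLOG_DIR) and free_bytes(COMMITLOG_DIR) < THRESHOLD:
        # In a real deployment this would page someone, not just print.
        print("WARNING: commit log volume has less than %d bytes free"
              % THRESHOLD)
```

Run it from cron every few minutes and you have removed one of the more
common ways of taking a node down by accident.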

Best, Dominic

On 22 June 2011 22:24, Les Hazlewood <> wrote:

> I'm planning on using Cassandra as a product's core data store, and it is
> imperative that it never goes down or loses data, even in the event of a
> data center failure.  This uptime requirement ("five nines": 99.999% uptime)
> w/ WAN capabilities is largely what led me to choose Cassandra over other
> NoSQL products, given its history and 'from the ground up' design for such
> operational benefits.
> However, in a recent thread, a user indicated that all 4 of his
> Cassandra instances were down because the OS killed the Java processes due
> to memory starvation, and all 4 instances went down within a relatively
> short period of each other.  Another user helped out and replied that
> running 0.8 and nodetool repair on each node regularly via a cron job (once
> a day?) seems to work for him.
> Naturally this was disconcerting to read, given our needs for a Highly
> Available product - we'd be royally screwed if this ever happened to us.
>  But given Cassandra's history and its current production use, I'm aware
> that this HA/uptime is being achieved today, and I believe it is certainly
> achievable.
> So, is there a collective set of guidelines or best practices to ensure
> this problem (or unavailability due to OOM) can be easily managed?
> Things like memory settings, initial GC recommendations, cron
> recommendations, ulimit settings, etc. that can be bundled up as a
> best-practices "Production Kickstart"?
> Could anyone share their nuggets of wisdom or point me to resources where
> this may already exist?
> Thanks!
> Best regards,
> Les
