cassandra-user mailing list archives

From Joe Stump <>
Subject Re: Cassandra data loss
Date Mon, 24 May 2010 16:12:14 GMT

On May 24, 2010, at 10:01 AM, Steve Lihn wrote:

> So if I set it up to be strongly consistent, I should have the same level of consistency
> as traditional relational DB ?

If you use, say, QUORUM as the consistency level, it ensures at least 2 out of the 3 replicas
have acknowledged that they've saved the data. RDBMS consistency and Cassandra consistency
are two different beasts. Just remember that write throughput degrades as you raise the
consistency level, and a higher level also makes you less tolerant of network partitions.
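To make the QUORUM arithmetic concrete, here is a minimal sketch (my own illustration, not Cassandra's code): with replication factor N, a quorum is floor(N/2) + 1, and a read is guaranteed to see the latest write whenever the read and write replica sets must overlap, i.e. R + W > N.

```python
# Illustrative quorum math; function names are mine, not Cassandra's API.
def quorum(replication_factor: int) -> int:
    # A majority of replicas: floor(N/2) + 1.
    return replication_factor // 2 + 1

def is_strongly_consistent(n: int, r: int, w: int) -> bool:
    # Any read set of size r must intersect any write set of size w,
    # so at least one replica read holds the latest write.
    return r + w > n

N = 3
W = R = quorum(N)                         # 2 of 3 replicas
print(W)                                  # 2
print(is_strongly_consistent(N, R, W))    # True: QUORUM reads + QUORUM writes
print(is_strongly_consistent(N, 1, 1))    # False: ONE/ONE can read stale data
```

This is why QUORUM costs throughput and partition tolerance: every operation must reach a majority of replicas before it can complete.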

> On the other hand, what will happen if I set it up as eventual consistent? Will the data
> become inconsistent after a crash/reboot, similar to the case of asynchronous replication?
> Is there an automated conflict resolution algorithm in Cassandra (which will likely cause
> data loss)? Or human intervention is needed?

Everything is eventually consistent in Cassandra. Period. You can read more about the ConsistencyLevel
flag for writes/reads on the API wiki page[1]. Data on a single machine is usually not inconsistent
as long as it has hit the commit log (use ConsistencyLevel = ONE to ensure the write has at
least reached the commit log on a single node).
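A toy model of what the coordinator does may help: it sends the mutation to all replicas and succeeds once enough of them have acknowledged appending it to their commit log. The class and function names below are mine for illustration, not the Cassandra API.

```python
# Toy model of counting replica acknowledgements against a consistency
# level. "Durability" here is just reaching the replica's commit log.
class Replica:
    def __init__(self, up: bool = True):
        self.up = up
        self.commit_log = []

    def append(self, mutation: str) -> None:
        if not self.up:
            raise IOError("replica down")
        self.commit_log.append(mutation)  # durable once in the commit log

def write(replicas, mutation: str, required_acks: int) -> int:
    acks = 0
    for r in replicas:
        try:
            r.append(mutation)
            acks += 1
        except IOError:
            pass  # coordinator tolerates some failed replicas
    if acks < required_acks:
        raise RuntimeError(f"only {acks} acks, needed {required_acks}")
    return acks

replicas = [Replica(), Replica(), Replica(up=False)]   # one node down
print(write(replicas, "row1=v1", required_acks=1))     # ONE: 2 acks, succeeds
print(write(replicas, "row1=v1", required_acks=2))     # QUORUM: 2 acks, succeeds
# required_acks=3 (ALL) would fail while the third replica is down.
```

Note that even at ConsistencyLevel ONE the write still goes to all replicas; the level only controls how many acknowledgements the client waits for.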

My understanding of what happens on crash/reboot is that the node replays its commit log. If
the data needs to be on 3 nodes, Cassandra fires off background processes to repair it (though
I think this mostly happens at read time, via read repair).
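The read-repair idea can be sketched in a few lines (again, my own illustration of the concept, not Cassandra's implementation): the coordinator collects (value, timestamp) versions from the replicas, returns the newest, and pushes that version back to any replica holding a stale or missing copy.

```python
# Sketch of read repair. Each replica is modeled as a dict mapping
# key -> (value, timestamp); newest timestamp wins.
def read_with_repair(replica_data, key):
    versions = [d.get(key, (None, -1)) for d in replica_data]
    newest_value, newest_ts = max(versions, key=lambda vt: vt[1])
    # Repair pass: overwrite any replica holding an older version.
    for d in replica_data:
        if d.get(key, (None, -1))[1] < newest_ts:
            d[key] = (newest_value, newest_ts)
    return newest_value

replicas = [
    {"row1": ("old", 1)},   # stale copy
    {"row1": ("new", 5)},   # latest copy
    {},                     # missed the write entirely
]
print(read_with_repair(replicas, "row1"))               # new
print(all(d["row1"] == ("new", 5) for d in replicas))   # True: all repaired
```

After the read, all three replicas converge on the newest version, which is the "eventual" part of eventual consistency.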

No human intervention is needed.


