incubator-cassandra-user mailing list archives

From Edmond Lau <edm...@ooyala.com>
Subject Re: can't write with consistency level of one after some nodes fail
Date Thu, 29 Oct 2009 20:38:54 GMT
On Thu, Oct 29, 2009 at 1:20 PM, Jonathan Ellis <jbellis@gmail.com> wrote:

> On Thu, Oct 29, 2009 at 1:18 PM, Edmond Lau <edmond@ooyala.com> wrote:
> > I have a freshly started 3-node cluster with a replication factor of
> > 2.  If I take down two nodes, I can no longer do any writes, even with
> > a consistency level of one.  I tried on a variety of keys to ensure
> > that I'd get at least one where the live node was responsible for one
> > of the replicas.  I have not yet tried on trunk.  On cassandra 0.4.1,
> > I get an UnavailableException.
>
> This sounds like the bug we fixed in CASSANDRA-496 on trunk.
>

Excellent - thanks.  Time to start using trunk.
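(For anyone following along, the availability rule at issue can be sketched as follows. This is an illustrative simulation of the expected behavior, not Cassandra code; the function and names are hypothetical.)

```python
# Sketch (not Cassandra internals): a write at a given consistency level
# should be accepted when at least `required` of the key's replicas are
# alive, else the coordinator raises UnavailableException.

def write_available(live_replicas, replication_factor, level):
    """Return True if a write can be acknowledged at `level`."""
    required = {
        "ONE": 1,
        "QUORUM": replication_factor // 2 + 1,
        "ALL": replication_factor,
    }[level]
    return live_replicas >= required

# 3-node cluster, RF=2, two nodes down: at most one replica of a key is live.
print(write_available(1, 2, "ONE"))     # True -- should succeed post-496
print(write_available(1, 2, "QUORUM"))  # False -- UnavailableException
```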


>
> > Along the same lines, how does Cassandra handle network partitioning
> > where 2 writes for the same keys hit 2 different partitions, neither
> > of which is able to form a quorum?  Dynamo maintained version vectors
> > and put the burden on the client to resolve conflicts, but there's no
> > similar interface in the thrift api.
>
> If you use QUORUM or ALL consistency, neither write will succeed.  If
> you use ONE, both will, and the one with the higher timestamp will
> "win" when the partition heals.
>

Got it.
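(In other words, reconciliation is last-write-wins on the client-supplied timestamp. A minimal sketch of that rule, with illustrative names rather than Cassandra internals:)

```python
# Sketch: when the partition heals, the column version with the higher
# timestamp wins. Each version is a hypothetical (value, timestamp) pair.

def reconcile(a, b):
    """Pick the winning (value, timestamp) pair: higher timestamp wins."""
    return a if a[1] >= b[1] else b

# Two conflicting writes accepted at ONE on opposite sides of a partition:
side_a = ("value-from-side-a", 1000)
side_b = ("value-from-side-b", 1005)
print(reconcile(side_a, side_b))  # ('value-from-side-b', 1005)
```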


>
> -Jonathan
>

Edmond
