incubator-cassandra-user mailing list archives

From Edmond Lau <>
Subject Re: can't write with consistency level of one after some nodes fail
Date Thu, 29 Oct 2009 21:37:08 GMT
I've updated to trunk, and I'm still hitting the same issue but it's
manifesting itself differently.  Again, I'm running with a freshly
started 3-node cluster with a replication factor of 2.  I then take
down two nodes.

If I write with a consistency level of ONE on any key, I get an

ERROR [pool-1-thread-45] 2009-10-29 21:27:10,120
(line 183) error writing key 1
InvalidRequestException(why:Cannot block for less than one replica)
        at org.apache.cassandra.service.QuorumResponseHandler.<init>(
        at org.apache.cassandra.locator.AbstractReplicationStrategy.getResponseHandler(
        at org.apache.cassandra.service.StorageService.getResponseHandler(
        at org.apache.cassandra.service.StorageProxy.insertBlocking(
        at org.apache.cassandra.service.CassandraServer.doInsert(
        at org.apache.cassandra.service.CassandraServer.insert(
        at org.apache.cassandra.service.Cassandra$Processor$insert.process(
        at org.apache.cassandra.service.Cassandra$Processor.process(
        at org.apache.thrift.server.TThreadPoolServer$
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(
        at java.util.concurrent.ThreadPoolExecutor$

Oddly, a write with a consistency level of QUORUM succeeds for certain
keys (but fails for others) even though I only have one live node.
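For what it's worth, the quorum Cassandra requires is a strict majority of the replicas, floor(RF/2) + 1, so with a replication factor of 2 a QUORUM write should need 2 live replicas and thus fail with only one node up. A rough sketch of that arithmetic (illustrative only, with made-up function names; not Cassandra's actual code):

```python
# Illustrative sketch of write-consistency arithmetic; hypothetical
# helper names, not Cassandra's actual implementation.

def quorum_size(replication_factor: int) -> int:
    # A strict majority of the replicas: floor(RF / 2) + 1.
    return replication_factor // 2 + 1

def write_should_succeed(consistency_level: str,
                         replication_factor: int,
                         live_replicas: int) -> bool:
    # How many replica acks the coordinator must block for.
    if consistency_level == "ONE":
        required = 1
    elif consistency_level == "QUORUM":
        required = quorum_size(replication_factor)
    elif consistency_level == "ALL":
        required = replication_factor
    else:
        raise ValueError(consistency_level)
    return live_replicas >= required

# With RF = 2 and a single live replica:
#   ONE should succeed; QUORUM (needs 2) and ALL should fail.
```

By this arithmetic, the behavior reported above is inverted: ONE fails and QUORUM sometimes succeeds, which points at a bug in how the blocked-for count is computed rather than at the consistency model itself.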


On Thu, Oct 29, 2009 at 1:38 PM, Edmond Lau <> wrote:
> On Thu, Oct 29, 2009 at 1:20 PM, Jonathan Ellis <> wrote:
>> On Thu, Oct 29, 2009 at 1:18 PM, Edmond Lau <> wrote:
>> > I have a freshly started 3-node cluster with a replication factor of
>> > 2.  If I take down two nodes, I can no longer do any writes, even with
>> > a consistency level of one.  I tried on a variety of keys to ensure
>> > that I'd get at least one where the live node was responsible for one
>> > of the replicas.  I have not yet tried on trunk.  On cassandra 0.4.1,
>> > I get an UnavailableException.
>> This sounds like the bug we fixed in CASSANDRA-496 on trunk.
> Excellent - thanks.  Time to start using trunk.
>> > Along the same lines, how does Cassandra handle network partitioning
>> > where 2 writes for the same keys hit 2 different partitions, neither
>> > of which are able to form a quorum?  Dynamo maintained version vectors
>> > and put the burden on the client to resolve conflicts, but there's no
>> > similar interface in the thrift api.
>> If you use QUORUM or ALL consistency, neither write will succeed.  If
>> you use ONE, both will, and the one with the higher timestamp will
>> "win" when the partition heals.
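>> The reconciliation described here is last-write-wins on the client-supplied
>> timestamp. A minimal sketch of that rule (hypothetical representation of a
>> column as a (value, timestamp) pair; not the thrift api):
>>
>> ```python
>> # Minimal sketch of timestamp-based ("last write wins") reconciliation.
>> # Columns are modeled as (value, timestamp) pairs for illustration only.
>>
>> def reconcile(a, b):
>>     # The write with the higher timestamp wins when the partition heals;
>>     # ties keep the first argument here (real tie-breaking may differ).
>>     return a if a[1] >= b[1] else b
>>
>> # Two conflicting writes accepted on opposite sides of a partition at ONE:
>> side_a = ("value-from-partition-A", 1000)
>> side_b = ("value-from-partition-B", 2000)
>> # After the partition heals, side_b's higher timestamp wins.
>> ```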
> Got it.
>> -Jonathan
> Edmond
