incubator-cassandra-user mailing list archives

From Kanwar Sangha <>
Subject RE: Mutation dropped
Date Mon, 18 Feb 2013 12:48:48 GMT
Thanks Aaron.

Does the rpc_timeout not control the client timeout? Is there a configurable parameter
to control the replication timeout between nodes? Or is the same parameter used for that,
since the other node also acts like a client?
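
For reference, the timeout in question lives in cassandra.yaml. A sketch of the relevant settings (names from the Cassandra 1.2 era; older releases used a single rpc_timeout_in_ms, and the values below are illustrative, not recommendations):

```yaml
# cassandra.yaml (Cassandra 1.2+): per-operation coordinator/replica timeouts.
# A replica that cannot start processing the write within this window drops
# the mutation; there is no separate "replication timeout" knob.
write_request_timeout_in_ms: 10000
read_request_timeout_in_ms: 10000
# Catch-all for other request types.
request_timeout_in_ms: 10000
```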

From: aaron morton []
Sent: 17 February 2013 11:26
Subject: Re: Mutation dropped

You are hitting the maximum throughput on the cluster.

The messages are dropped because the node fails to start processing them before rpc_timeout expires.

However, the request still succeeds because the client-requested CL was achieved.

Testing with RF 2 and CL 1 really just tests the disks on one local machine. Both nodes replicate
each row, and writes are sent to each replica, so the only thing the client is waiting on
is the local node writing to its commit log.

Testing with (and running in prod) RF 3 and CL QUORUM is a more realistic scenario.
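
The quorum arithmetic behind that advice can be sketched in a few lines (plain Python, not Cassandra code): QUORUM needs floor(RF/2) + 1 replica acks, so RF 3 tolerates one slow or dead replica while RF 2 tolerates none:

```python
def quorum(rf: int) -> int:
    """Replica acks required for CL=QUORUM at replication factor rf."""
    return rf // 2 + 1

def tolerated_down(rf: int) -> int:
    """Replicas that may be down with QUORUM reads/writes still succeeding."""
    return rf - quorum(rf)

for rf in (2, 3):
    print(f"RF={rf}: quorum={quorum(rf)}, tolerates {tolerated_down(rf)} down")
# RF=2: quorum=2, tolerates 0 down
# RF=3: quorum=2, tolerates 1 down
```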


Aaron Morton
Freelance Cassandra Developer
New Zealand


On 15/02/2013, at 9:42 AM, Kanwar Sangha <> wrote:

Hi - Is there a parameter which can be tuned to prevent the mutations from being dropped?
Is this logic correct?

Node A and B with RF=2, CL=1. Load balanced between the two.

--  Address           Load       Tokens  Owns (effective)  Host ID                       
UN  10.x.x.x       746.78 GB  256     100.0%            dbc9e539-f735-4b0b-8067-b97a85522a1a
UN  10.x.x.x       880.77 GB  256     100.0%            95d59054-be99-455f-90d1-f43981d3d778

Once we hit a very high TPS (around 50k/sec of inserts), the nodes start falling behind and
we see the mutation dropped messages. But there are no failures on the client. Does that mean
the other node is not able to persist the replicated data? Is there some timeout associated
with replicated-data persistence?
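
What is described above can be sketched with a toy coordinator model (illustrative Python only, not Cassandra internals; the timeout and latency numbers are made up): the client sees success as soon as CL replicas ack within the timeout, while a replica that misses the window drops the mutation with no client-visible error:

```python
def coordinator_write(replica_latencies_ms, cl, rpc_timeout_ms=10000):
    """Toy model: the coordinator sends the mutation to every replica and
    answers the client once `cl` replicas have acked within the timeout."""
    in_time = [t for t in replica_latencies_ms if t <= rpc_timeout_ms]
    client_success = len(in_time) >= cl                 # what the client observes
    dropped = len(replica_latencies_ms) - len(in_time)  # mutations dropped server-side
    return client_success, dropped

# RF=2, CL=1: the local replica acks fast, the overloaded one misses the window.
print(coordinator_write([3, 15000], cl=1))  # -> (True, 1)
```

The client succeeds, yet one mutation was dropped; only repair mechanisms will bring that replica back in sync.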


From: Kanwar Sangha [<>]
Sent: 14 February 2013 09:08
Subject: Mutation dropped

Hi - I am doing a load test using YCSB across 2 nodes in a cluster and seeing a lot of mutation
dropped messages. Am I right that this is because the replica is not being written to the
other node? RF=2, CL=1.

From the wiki:
For MUTATION messages this means that the mutation was not applied to all replicas it was
sent to. The inconsistency will be repaired by Read Repair or Anti Entropy Repair.

