incubator-cassandra-user mailing list archives

From Sonny Heer <>
Subject Re: Timeouts running batch_mutate
Date Tue, 18 May 2010 23:37:37 GMT
Yeah, there are many writes happening at the same time to any given Cassandra node.

e.g. assume 10 machines, all running Hadoop and Cassandra.  The Hadoop
nodes are randomly picking a Cassandra node and writing directly using
batch_mutate.
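The node selection described above can be sketched roughly as follows. This is not code from the thread; the host list and class names are hypothetical, and a real setup would get the ring's host list from configuration or from describe_ring:

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class NodePicker {
    // Hypothetical list of Cassandra node addresses; in the setup above,
    // each Hadoop reducer would hold the full list of ring members.
    private final List<String> hosts;

    public NodePicker(List<String> hosts) {
        this.hosts = hosts;
    }

    // Pick a node uniformly at random for the next batch_mutate call.
    public String pick() {
        return hosts.get(ThreadLocalRandom.current().nextInt(hosts.size()));
    }
}
```

Random selection spreads coordinator load evenly, but it also means every node sees a share of the write traffic, so a slow or overloaded node will surface as timeouts on some fraction of requests.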

After increasing the timeout even more, I don't get that exception
anymore, but now I'm getting UnavailableException.

The wiki states this happens when not all the replicas required could be
created and/or read.  How do we resolve this problem?  The write
consistency is ONE.
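Since UnavailableException means the coordinator believed it couldn't reach enough replicas, a common client-side mitigation is to back off and retry rather than fail the reducer outright. A minimal sketch, assuming a hypothetical withRetry helper and a stand-in exception class (the real one lives in org.apache.cassandra.thrift); the attempt count and base delay are illustrative, not Cassandra defaults:

```java
import java.util.concurrent.Callable;

public class RetryingWriter {
    // Stand-in for Thrift's UnavailableException so this sketch compiles
    // without the Cassandra client on the classpath.
    static class UnavailableException extends Exception {}

    // Run the write, retrying with exponential backoff when replicas
    // are reported unavailable; rethrow after maxAttempts failures.
    static <T> T withRetry(Callable<T> write, int maxAttempts) throws Exception {
        long delayMs = 100;
        for (int attempt = 1; ; attempt++) {
            try {
                return write.call();
            } catch (UnavailableException e) {
                if (attempt >= maxAttempts) throw e;
                Thread.sleep(delayMs);
                delayMs *= 2; // back off so overloaded nodes can catch up
            }
        }
    }
}
```

Backing off matters more than retrying here: if every reducer immediately re-sends its batch, the cluster that was already too busy to ack the write just gets busier.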


On Sat, May 15, 2010 at 8:02 AM, Jonathan Ellis <> wrote:
> RpcTimeout should be sufficient.
> You can turn on debug logging to see how long it's actually taking the
> destination node to do the write (or look at cfstats, if no other
> writes are going on).
> On Fri, May 14, 2010 at 11:55 AM, Sonny Heer <> wrote:
>> Hey,
>> I'm running a map/reduce job, reading from an HDFS directory, and
>> reducing to Cassandra using the batch_mutate method.
>> The reducer builds the list of row mutations for a single row and
>> calls batch_mutate at the end.  As I move to a larger dataset, I'm
>> seeing the following exception:
>> Caused by: TimedOutException()
>>        at org.apache.cassandra.thrift.Cassandra$
>>        at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(
>>        at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(
>> I changed RpcTimeoutInMillis to 60 seconds with no effect.  What
>> configuration changes should I make when doing intensive write
>> operations using batch_mutate?
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of Riptano, the source for professional Cassandra support
