I run Hadoop jobs that read data from Cassandra 1.2.8 and write the results back to other tables. One of my reduce tasks was killed twice by the job tracker because it had not reported status for more than 10 minutes; the third attempt was successful.

The error message for killed reduce tasks is:

java.io.IOException: TimedOutException(acknowledged_by:0)
at org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:245)
Caused by: TimedOutException(acknowledged_by:0)
at org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_result.read(Cassandra.java:41884)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_prepared_cql3_query(Cassandra.java:1689)
at org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1674)
at org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:229)
Task attempt_201310081258_0006_r_000000_0 failed to report status for 600 seconds. Killing!

I'm wondering how the task could go 600 seconds without reporting status, and how that relates to the TimedOutException at the top of the stack trace. write_request_timeout_in_ms is at its default of 10000 ms, so the write should have failed much earlier.
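For reference, the 600 seconds matches Hadoop's task liveness timeout, mapred.task.timeout (default 600000 ms). I know I could raise it as a workaround, along the lines of the sketch below (the value 1200000 is just an example), but that would only hide the stall rather than explain it:

```
<!-- mapred-site.xml (or per-job via -Dmapred.task.timeout=...):
     raise the interval after which a non-reporting task is killed.
     Example value only; this is a workaround, not a fix. -->
<property>
  <name>mapred.task.timeout</name>
  <value>1200000</value>
</property>
```

What I'd really like to understand is why the reducer blocked without making progress despite the much shorter Cassandra-side write timeout.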