cassandra-user mailing list archives

From Aaron Morton <>
Subject Re: Timeout Errors while running Hadoop over Cassandra
Date Wed, 12 Jan 2011 22:08:36 GMT
What's happening in the Cassandra server logs when you get these errors?

Reading through the Hadoop integration code in Cassandra 0.6.6, it looks like it creates a Thrift client with an infinite
timeout. So it may be an internode timeout, which is set in storage-conf.xml.
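If it is the internode/RPC timeout, the 0.6.x setting lives in storage-conf.xml. A minimal fragment, assuming the 0.6-era element name RpcTimeoutInMillis; the value shown is only an illustrative increase, not a tuning recommendation:

```xml
<!-- storage-conf.xml (0.6.x): how long the coordinator waits for replies
     from other nodes before throwing TimedOutException back to the client.
     10000 ms was the stock default; 60000 here is only an illustrative
     bump for long get_range_slices scans, not a recommendation. -->
<RpcTimeoutInMillis>60000</RpcTimeoutInMillis>
```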


On 13 Jan 2011, at 07:40 AM, Jairam Chandar <> wrote:

Hi folks,

We have a Cassandra 0.6.6 cluster running in production. We want to run Hadoop (version 0.20.2)
jobs over this cluster in order to generate reports. 
I modified the word_count example in the contrib folder of the Cassandra distribution. While
the program runs fine for small datasets (on the order of 100-200 MB) on a small cluster
(2 machines), it starts to give errors when run on a bigger cluster (5 machines)
with a much larger dataset (400 GB). Here is the error that we get - 

java.lang.RuntimeException: TimedOutException()
	at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$RowIterator.maybeInit(
	at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$RowIterator.computeNext(
	at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$RowIterator.computeNext(
	at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:98)
	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(
	at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(
	at org.apache.hadoop.mapreduce.Mapper.run(
	at org.apache.hadoop.mapred.MapTask.runNewMapper(
	at org.apache.hadoop.mapred.Child.main(
Caused by: TimedOutException()
	at org.apache.cassandra.thrift.Cassandra$
	at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(
	at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(
	at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$RowIterator.maybeInit(
	... 11 more

I came across this page on the Cassandra wiki -
and tried modifying the ulimit and changing batch sizes. These did not help: though the number
of successful map tasks increased, the job eventually fails since the total number of map tasks
is huge. 
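For reference, the batch size in question is the number of rows each get_range_slices call asks for, controlled by the cassandra.range.batch.size job property that ColumnFamilyRecordReader reads. A minimal stand-alone sketch of setting it, using java.util.Properties in place of Hadoop's Configuration so it runs without the Hadoop/Cassandra jars; the property name is taken from the 0.6-era code, and 256 is an arbitrary illustrative value:

```java
import java.util.Properties;

public class RangeBatchSize {
    public static void main(String[] args) {
        // Stand-in for org.apache.hadoop.conf.Configuration so this sketch
        // compiles without the Hadoop jars; in a real job you would call
        // set(...) on the job's Configuration instead.
        Properties conf = new Properties();

        // Fewer rows per get_range_slices call means each Thrift request does
        // less work server-side, making a per-request timeout less likely.
        // 256 is an arbitrary illustrative value, not a recommendation.
        conf.setProperty("cassandra.range.batch.size", "256");

        System.out.println(conf.getProperty("cassandra.range.batch.size"));
    }
}
```

Shrinking the batch size trades more round trips for smaller, faster individual requests, which is why the wiki suggests it for timeout problems.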

Any idea what could be causing this? The program we are running is a very slight modification
of the word_count example with respect to reading from Cassandra; the only changes are the specific
keyspace, column family and columns. The rest of the reading code is the same as the word_count
example in the Cassandra 0.6.6 source.

Thanks and regards,
Jairam Chandar