cassandra-user mailing list archives

From Jeremy Hanna <>
Subject Re: cassandra/hadoop BulkOutputFormat failures
Date Sat, 15 Sep 2012 03:34:13 GMT
A couple of guesses:
- Are you mixing versions of Cassandra? Streaming differences between versions can cause
this error. That is, are you bulk loading with one version of Cassandra into a cluster
running a different version?
- (Shot in the dark) is your cluster overwhelmed for some reason?
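A quick way to check the first guess is to compare the cluster's version with the Cassandra jar the job ships. This is only a sketch: the hostname and lib directory below are placeholders, not values from this thread, and the script just prints the two commands to run on each side.

```shell
# Placeholder host/path; substitute your own environment's values.
cassandra_node="cassandra-node1"            # assumption: any live Cassandra node
job_lib_dir="/usr/lib/hadoop/lib"           # assumption: where the job's jars live
# Command to run on a Cassandra node: reports the cluster's Cassandra version.
echo "nodetool -h $cassandra_node version"
# Command to run on the Hadoop side: shows which cassandra jar the job bundles.
echo "ls $job_lib_dir | grep -i cassandra"
```

If the jar version and the cluster version differ, the streaming protocol mismatch is a likely cause of the failed hosts.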

If the temp dir hasn't been cleaned up yet, you can retry the load, FWIW.
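A manual retry could look like the sketch below. Everything here is hypothetical (the temp path, keyspace/column family names, and seed host are illustrative, not from the thread); BulkRecordWriter writes SSTables into a per-task temp directory laid out as `<tmpdir>/<Keyspace>/<ColumnFamily>/`, which is the layout sstableloader expects.

```shell
# Minimal retry sketch, assuming the reducer's temp SSTable directory survived.
sstable_dir="/tmp/MyKeyspace/MyColumnFamily"   # assumption: leftover task temp dir
seed_node="cassandra-node1"                    # assumption: any reachable live node
# -d gives sstableloader an initial contact point in the target cluster;
# drop the echo to actually run the load once the paths are confirmed.
echo sstableloader -d "$seed_node" "$sstable_dir"
```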


On Sep 14, 2012, at 1:34 PM, Brian Jeltema <> wrote:

> I'm trying to do a bulk load from a Cassandra/Hadoop job using the BulkOutputFormat class.
> It appears that the reducers are generating the SSTables, but the job fails to load them
> into the cluster:
> 12/09/14 14:08:13 INFO mapred.JobClient: Task Id : attempt_201208201337_0184_r_000004_0,
> Status : FAILED
> Too many hosts failed: [/, /, /, /, /, /]
>        at org.apache.cassandra.hadoop.BulkRecordWriter.close(
>        at org.apache.cassandra.hadoop.BulkRecordWriter.close(
>        at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(
>        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(
>        at org.apache.hadoop.mapred.ReduceTask.run(
>        at org.apache.hadoop.mapred.Child$4.run(
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(
>        at org.apache.hadoop.security.UserGroupInformation.doAs(
>        at org.apache.hadoop.mapred.Child.main(
> A brief look at the BulkOutputFormat class shows that it depends on SSTableLoader.
> My Hadoop cluster and my Cassandra cluster are co-located on the same set of machines.
> I haven't found any stated restrictions, but does this technique only work if the
> Hadoop cluster is distinct from the Cassandra cluster? Any suggestions on how to get
> past this problem?
> Thanks in advance.
> Brian
