cassandra-user mailing list archives

From Michael Moores <>
Subject Re: 0.7.0-beta2 and Hadoop
Date Thu, 14 Oct 2010 20:19:32 GMT
I SOLVED the problem.
It was my misunderstanding of how the Cassandra connection is used when calling getSlices().
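For anyone who finds this thread later: ColumnFamilyInputFormat computes its input splits on the machine where you run "hadoop jar", not on the TaskTracker nodes, so the Thrift address you give ConfigHelper has to be reachable from the submitting host; pointing it at localhost only works when the submitter itself runs Cassandra. A minimal, Cassandra-free sketch for checking reachability from the submitting host (host, port, and timeout here are placeholders, not values from the thread):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ThriftReachable {
    /** True if a TCP connect to host:port succeeds within timeoutMs. */
    public static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false; // "Connection refused" and timeouts both land here
        }
    }

    public static void main(String[] args) {
        // 9160 was the default Thrift rpc_port in 0.7; swap in your node's address.
        System.out.println(reachable("localhost", 9160, 500));
    }
}
```

Run this on the JobTracker/submitting box with the same address you put in ConfigHelper; if it prints false there, split calculation will fail exactly as in the trace below.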

On Oct 14, 2010, at 10:06 AM, Michael Moores wrote:

OK, I moved back to Hadoop 0.20.2 and the WordCount example is doing better.
But I am still seeing a problem, which may be due to my lack of experience with Hadoop.
I am running "hadoop jar ..." on my JobTracker/NameNode machine, which is not running Cassandra.
I have a DataNode/TaskTracker running on all Cassandra nodes, with my ConfigHelper set up to
talk to Cassandra on localhost.
When I run the job, I see it can't connect (I renamed the main class to "ProfileStats"):

[hadoop@kv-app01 test]$ hadoop jar hadoop-cassandra-0.0.1-SNAPSHOT.jar com.real.uds.hadoop.ProfileStats
xyz -libjars ./cassandra-0.7.0-beta2.jar ./libthrift-r959516.jar
10/10/14 09:57:57 INFO hadoop.ProfileStats: main: adding jars...
10/10/14 09:57:58 INFO hadoop.ProfileStats: output reducer type: filesystem
10/10/14 09:57:58 INFO hadoop.ProfileStats: main: adding jars AGAIN...
Exception in thread "main" unable to connect to server
        at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.createConnection(
Caused by: Connection refused
        at Method)

Should I expect my job to be executed on the TaskTracker nodes?

On Oct 13, 2010, at 5:39 PM, Michael Moores wrote:

What version of Hadoop should I be using with Cassandra 0.7.0-beta2?
I am using the latest version, 0.21.0.

Running a modified version of the WordCount example, I get a linkage error thrown from the getSplits() method.

Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext,
but class was expected
        at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(
        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(
        at org.apache.hadoop.mapreduce.Job.submit(
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(
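That linkage error is a binary-compatibility break: Hadoop 0.21 changed org.apache.hadoop.mapreduce.JobContext from a class into an interface, so a ColumnFamilyInputFormat compiled against the 0.20 class fails to link on 0.21, which is why dropping back to 0.20.2 helps. A self-contained sketch of the failure mode (toy class names, nothing Hadoop-specific; requires a JDK so javax.tools can compile):

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.ToolProvider;

public class IcceDemo {
    // Compile one source file into dir with the running JDK's compiler.
    static void compile(Path dir, String name, String src) throws Exception {
        Path f = dir.resolve(name + ".java");
        Files.writeString(f, src);
        int rc = ToolProvider.getSystemJavaCompiler()
                .run(null, null, null, "-d", dir.toString(), f.toString());
        if (rc != 0) throw new IllegalStateException("compile failed: " + name);
    }

    /** Returns the class name of the linkage error the stale Caller hits. */
    public static String demo() throws Exception {
        Path dir = Files.createTempDirectory("icce-demo");
        // The 0.20 shape: JobContext is a concrete class...
        compile(dir, "JobContext",
            "public class JobContext { public String name() { return \"ctx\"; } }");
        // ...and Caller is compiled against it (the call becomes invokevirtual).
        compile(dir, "Caller",
            "public class Caller { public static String use(JobContext c) { return c.name(); } }");
        // The 0.21 shape: JobContext becomes an interface; Caller is NOT recompiled.
        compile(dir, "JobContext", "public interface JobContext { String name(); }");
        try (URLClassLoader cl = new URLClassLoader(new URL[]{dir.toUri().toURL()}, null)) {
            Class<?> caller = cl.loadClass("Caller");
            Method use = caller.getMethod("use", cl.loadClass("JobContext"));
            try {
                use.invoke(null, new Object[]{null});
                return "no error";
            } catch (InvocationTargetException e) {
                // Method resolution fails before the null receiver is ever checked.
                return e.getCause().getClass().getName();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

The same mismatch in reverse (compiled against the interface, run against the class) also raises IncompatibleClassChangeError, which is why mixing Hadoop major versions on the classpath tends to fail at getSplits() time.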
