cassandra-user mailing list archives

From Christian Decker <decker.christ...@gmail.com>
Subject Re: Cassandra and Pig
Date Fri, 20 Aug 2010 13:57:58 GMT
Hm,
that was my conclusion too, but somehow I can't see what I'm doing wrong. I
checked that the thrift library is on both CLASSPATH and PIG_CLASSPATH, and,
as shown in the script above, I'm using register to add the library to the
job's dependencies (see the sketch below). Am I missing something else?
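
For reference, the register part of the script looks roughly like this (the
jar paths and the loader line are simplified placeholders for my setup, not
the exact file names):

    -- ship the thrift and cassandra jars to the backend together with the job
    register /opt/cassandra/lib/libthrift.jar;
    register /opt/cassandra/lib/apache-cassandra.jar;

    -- load a column family via the CassandraStorage loader from contrib/pig
    rows = LOAD 'cassandra://Keyspace1/Standard1'
           USING org.apache.cassandra.hadoop.pig.CassandraStorage();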

Regards,
Chris
--
Christian Decker
Software Architect
http://blog.snyke.net


On Wed, Aug 18, 2010 at 8:09 PM, Stu Hood <stu.hood@rackspace.com> wrote:

> Needing to manually copy the jars to all of the nodes would mean that you
> aren't applying the Pig 'register <jar>;' command properly.
>
> -----Original Message-----
> From: "Christian Decker" <decker.christian@gmail.com>
> Sent: Wednesday, August 18, 2010 7:08am
> To: user@cassandra.apache.org
> Subject: Re: Cassandra and Pig
>
> I got one step further by cheating a bit: I just took all the Cassandra jars
> and dropped them into the Hadoop lib folder, so at least now I can run some
> Pig scripts over the data in Cassandra. But this is far from optimal, since
> it means I'd also have to distribute my UDFs to the Hadoop cluster, or did I
> miss something?
>
> Regards,
> Chris
> --
> Christian Decker
> Software Architect
> http://blog.snyke.net
>
>
> On Tue, Aug 17, 2010 at 4:04 PM, Christian Decker <
> decker.christian@gmail.com> wrote:
>
> > OK, by now it's getting very strange. I deleted the entire installation
> > and restarted from scratch, and now I'm getting a similar error even
> > though I'm going through the pig_cassandra script.
> >
> > 2010-08-17 15:54:10,049 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
> > 2010-08-17 15:55:10,032 [Thread-10] INFO  org.apache.cassandra.config.DatabaseDescriptor - Auto DiskAccessMode determined to be standard
> > 2010-08-17 15:55:24,652 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_201008111350_0020
> > 2010-08-17 15:55:24,652 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://hmaster:50030/jobdetails.jsp?jobid=job_201008111350_0020
> > 2010-08-17 15:56:05,690 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 33% complete
> > 2010-08-17 15:56:09,874 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
> > 2010-08-17 15:56:09,874 [main] ERROR org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map reduce job(s) failed!
> > 2010-08-17 15:56:10,261 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
> > 2010-08-17 15:56:10,351 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2997: Unable to recreate exception from backed error: Error: java.lang.ClassNotFoundException: org.apache.thrift.TBase
> >
> > which is a bit different from my original error, but on the backend I get
> > a classic ClassNotFoundException.
> > Any ideas?
> > --
> > Christian Decker
> > Software Architect
> > http://blog.snyke.net
> >
>
>
>
