From: Mahesh Balija <balijamahesh.mca@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 20 Nov 2012 18:01:52 +0530
Subject: Re: java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to java.lang.String

Hi,

To put what Harsh said simply: the MapReduce framework only works on
serializable (i.e., Writable) objects, and keys must additionally be
comparable. So you cannot use the basic Java datatypes like int, String
or long; you should use the corresponding Hadoop types like IntWritable,
Text and LongWritable instead.

You get the error above because the source key and source value types
are determined by the InputFormat you use. I believe you are using
TextInputFormat in your job, so the key will be a LongWritable (the byte
offset of the line) and the value will be Text. When the framework tries
to pass that LongWritable to your mapper, it throws the
ClassCastException at runtime.
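
For example, inside your JobOrganizer class the mapper declared with the
Writable types would look roughly like this (only a sketch - I am
guessing at your output types and map logic, since you omitted them):

    // imports needed at the top of JobOrganizer.java
    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Input key/value types follow TextInputFormat: LongWritable (the
    // byte offset of the line) and Text (the line itself).
    public static class JobOrganizerMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, Text> {

        @Override
        public void map(LongWritable sourceKey, Text sourceValue,
                        OutputCollector<Text, Text> outputC,
                        Reporter reporter) throws IOException {
            // Work with plain Strings only inside the mapper.
            String line = sourceValue.toString();
            // Placeholder logic: emit the whole line as the key.
            outputC.collect(new Text(line), new Text(""));
        }
    }

Whatever the mapper emits also has to be Writable types, and those must
match what the job is configured with (see Harsh's second point below).
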
Best,
Mahesh Balija,
Calsoft Labs.

On Tue, Nov 20, 2012 at 12:41 AM, Harsh J <harsh@cloudera.com> wrote:
> Hi,
>
> 1. Map/Reduce in 1.x does not know how to efficiently and
> automatically serialize regular Java types such as String, Long, etc.
> There is experimental (and I wouldn't recommend using it either) Java
> serialization support for such types in 2.x releases if you enable the
> JavaSerialization classes. Please use Writable-based native type
> classes (such as Text for String, LongWritable for long, etc.).
>
> 2. The type checking is done based on configuration at runtime. What
> you set as JobConf.setOutput[Key/Value]Class and
> JobConf.setMapOutput[Key/Value]Class (or similar new API equivalents)
> will be checked for passing instances of key and value objects during
> runtime. By default, these are LongWritable (key) and Text (value) and
> you see the former in your error.
>
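
For completeness, the old-API job setup Harsh refers to would look
roughly like this - only a sketch, with a placeholder job name and
paths, and assuming the Text/Text output types from the mapper above:

    // also needs org.apache.hadoop.fs.Path and, from
    // org.apache.hadoop.mapred: JobConf, JobClient, TextInputFormat,
    // FileInputFormat, FileOutputFormat
    JobConf conf = new JobConf(JobOrganizer.class);
    conf.setJobName("joborganizer");

    conf.setInputFormat(TextInputFormat.class); // LongWritable keys, Text values
    conf.setMapperClass(JobOrganizerMapper.class);

    // These must match what the mapper/reducer actually emit; left
    // unset they default to LongWritable (key) and Text (value).
    conf.setMapOutputKeyClass(Text.class);
    conf.setMapOutputValueClass(Text.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);

    FileInputFormat.setInputPaths(conf, new Path("/user/hduser/input"));
    FileOutputFormat.setOutputPath(conf, new Path("/user/hduser/output"));

    JobClient.runJob(conf);
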
> On Mon, Nov 19, 2012 at 8:05 AM, Utester Utester <utesterp@yahoo.com> wrote:
> > Hi,
> >
> > I have Hadoop 1.0.3 running on Ubuntu Linux. I am playing around with a
> > simple Map-Reduce job.
> >
> > Details of the code
> > ===================
> > 1. Since this exercise is mainly to learn basic Hadoop APIs and how to
> > run these jobs, the logic in the Map job is irrelevant.
> > 2. The code snippet below shows just the APIs (which I believe are
> > causing the problem - PLEASE correct me if I am wrong and I will post
> > the entire code snippet):
> >
> > public class JobOrganizer {
> >     ....
> >     public static class JobOrganizerMapper extends MapReduceBase
> >             implements Mapper<String, String, String, String> {
> >
> >         @Override
> >         public void map(String sourceKey, String sourceValue,
> >                 OutputCollector<String, String> outputC,
> >                 Reporter reporter) throws IOException {
> >             ...
> >         }
> >     }
> > }
> >
> > Exception:
> >
> > hduser@utester-VirtualBox:/usr/local/hadoop/bin$
> > /usr/local/hadoop/bin/hadoop jar
> > ~/HadoopCodeProjectsFolder/JobOrganizerMapRedProject/11182012/HadoopMapRedProject.jar
> > org.u.hadoopmapred.JobOrganizer
> > Warning: $HADOOP_HOME is deprecated.
> > 12/11/18 16:36:22 WARN mapred.JobClient: Use GenericOptionsParser for
> > parsing the arguments. Applications should implement Tool for the same.
> > 12/11/18 16:36:22 INFO util.NativeCodeLoader: Loaded the native-hadoop
> > library
> > 12/11/18 16:36:22 WARN snappy.LoadSnappy: Snappy native library not loaded
> > 12/11/18 16:36:22 INFO mapred.FileInputFormat: Total input paths to
> > process : 1
> > 12/11/18 16:36:22 INFO mapred.JobClient: Running job: job_201211181608_0002
> > 12/11/18 16:36:23 INFO mapred.JobClient: map 0% reduce 0%
> > 12/11/18 16:36:42 INFO mapred.JobClient: Task Id :
> > attempt_201211181608_0002_m_000000_0, Status : FAILED
> > java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot
> > be cast to java.lang.String
> >   at org.u.hadoopmapred.JobOrganizer$JobOrganizerMapper.map(JobOrganizer.java:1)
> >   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
> >   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
> >
> > The Mapper interface as per the javadoc is Mapper<K1, V1, K2, V2> and
> > its map function is map(K1 key, V1 value, OutputCollector<K2, V2>
> > output, Reporter reporter). I wanted to parameterize K1, V1, K2 and V2
> > to all be String. Is something wrong in the way I am thinking? Is this
> > what is wrong? I have found similar questions on the internet but the
> > answers did not clarify how I am breaking the Mapper contract (I did
> > not have any compile errors - just a runtime error).
> >
> > Thanks
>
> --
> Harsh J
