From: Aseem Anand <aseem.iiith@gmail.com>
To: user@hadoop.apache.org
Date: Mon, 5 Nov 2012 15:57:19 +0530
Subject: Re: Task does not enter reduce function after secondary sort

Hey,
That change, coupled with a few minor fixes, got it working. It's strange, though, that MapReduce programs I wrote against the same API worked until now without @Override. Thanks :).

Thanks,
Aseem

On Mon, Nov 5, 2012 at 3:33 AM, Harsh J <harsh@cloudera.com> wrote:
> Yep - it will show an error, since your reduce(…) signature is wrong
> for the new API:
>
> http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Reducer.html#reduce(KEYIN,%20java.lang.Iterable,%20org.apache.hadoop.mapreduce.Reducer.Context)
>
> Chuck the Reporter object (it's an old-API thing, now built into
> Context itself) and transform it into:
>
> @Override
> public void reduce(Text key, Iterable<NullWritable> values, Context output) {
>     …
> }
>
> … and your IDE shouldn't complain anymore.
>
> On Mon, Nov 5, 2012 at 2:45 AM, Aseem Anand <aseem.iiith@gmail.com> wrote:
> > Hey,
> > Here are the code snippets.
> >
> > In the driver class:
> >         job.setMapperClass(SkyzKnnMapperT.class);
> >         job.setReducerClass(SkyzKnnReducer.class);
> >         job.setGroupingComparatorClass(GroupComparator.class);
> >         job.setPartitionerClass(MyPartitioner.class);
> >         job.setSortComparatorClass(KeyComparator.class);
> >         job.setMapOutputKeyClass(Text.class);
> >         job.setMapOutputValueClass(NullWritable.class);
> >         job.setOutputKeyClass(Text.class);
> >         job.setOutputValueClass(Text.class);
> >
> > public class GroupComparator extends WritableComparator {
> >
> >     protected GroupComparator() {
> >         super(Text.class, true);
> >     }
> >
> >     @Override
> >     public int compare(WritableComparable w1, WritableComparable w2) {
> >         // consider only the zone and day part of the key
> >         Text t1 = (Text) w1;
> >         Text t2 = (Text) w2;
> >         String[] t1Items = t1.toString().split(":");
> >         String[] t2Items = t2.toString().split(":");
> >         int comp = t1Items[0].compareTo(t2Items[0]);
> >         System.out.println("GROUP" + comp);
> >         return comp;
> >     }
> > }
> >
> > public class SkyzKnnReducer extends Reducer<Text,Iterable,Text,Text> {
> >     public void reduce(Text key, Iterable<NullWritable> values,
> >                        Context output, Reporter reporter)
> >             throws IOException, InterruptedException {
> >         String t = key.toString();
> >         t = "HELLO" + t;
> >         output.write(new Text(t), new Text(t));
> >     }
> > }
> >
> > The composite key is of the form A:Rest_of_text, where A is the natural key.
> >
> > Adding the @Override annotation to this reduce method shows an error in
> > Eclipse. What else could be going wrong?
> >
> > Thanks,
> > Aseem
> >
> > On Mon, Nov 5, 2012 at 2:33 AM, Harsh J <harsh@cloudera.com> wrote:
> >>
> >> Sounds like an override issue to me. If you can share your code, we
> >> can take a quick look - otherwise, try annotating your reduce(…)
> >> method with @Override and recompiling to see if it really is the right
> >> signature Java expects.
> >>
> >> On Mon, Nov 5, 2012 at 1:48 AM, Aseem Anand <aseem.iiith@gmail.com> wrote:
> >> > Hi,
> >> > I am using a secondary sort in my Hadoop program. My map function emits
> >> > (Text, NullWritable), where the Text contains the composite key; the
> >> > appropriate comparison functions and a custom Partitioner are in place.
> >> > These seem to be working fine.
> >> >
> >> > I have been struggling with the problem that these values are not being
> >> > received by the reduce function and instead automatically get written to
> >> > HDFS in x files, where x is the number of reducers. I have made sure the
> >> > reduce function is set to my Reduce function and not the identity reduce.
> >> >
> >> > Can someone please explain this behavior and what could possibly be
> >> > wrong?
> >> >
> >> > Thanks & Regards,
> >> > Aseem
> >>
> >> --
> >> Harsh J
>
> --
> Harsh J
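[Archive note] The failure mode Harsh diagnoses — a reduce method with an extra Reporter parameter overloading, rather than overriding, the new-API Reducer.reduce(KEYIN, Iterable&lt;VALUEIN&gt;, Context), so Hadoop silently runs the inherited identity reduce — can be sketched without Hadoop. The class and method names below are stand-ins for illustration, not Hadoop's actual API:

```java
import java.util.List;

public class OverrideTrapDemo {

    static class BaseReducer {
        // Stand-in for the new-API Reducer.reduce(KEYIN, Iterable<VALUEIN>, Context):
        // the default implementation passes the key through, like identity reduce.
        public String reduce(String key, Iterable<String> values) {
            return "identity:" + key;
        }
    }

    static class WrongSignature extends BaseReducer {
        // Extra parameter => this is an overload, not an override.
        // The "framework" never calls it, so the base identity version runs.
        public String reduce(String key, Iterable<String> values, Object reporter) {
            return "HELLO" + key;
        }
    }

    static class RightSignature extends BaseReducer {
        @Override // the compiler now verifies this really overrides the base method
        public String reduce(String key, Iterable<String> values) {
            return "HELLO" + key;
        }
    }

    // The framework only knows the base-class signature.
    static String runFramework(BaseReducer r) {
        return r.reduce("k", List.of());
    }

    public static void main(String[] args) {
        System.out.println(runFramework(new WrongSignature())); // identity:k
        System.out.println(runFramework(new RightSignature())); // HELLOk
    }
}
```

This is why adding @Override turns a silent runtime surprise into a compile-time error, as suggested earlier in the thread.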
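[Archive note] The GroupComparator in the thread compares only the natural-key part of the "A:Rest_of_text" composite key, so all composite keys sharing a natural key reach one reduce call, in full composite-key sort order. That effect can be sketched outside Hadoop with plain strings (the class name and sample keys below are made up for illustration):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SecondarySortDemo {

    // Mirror of GroupComparator.compare: look only at the natural key,
    // i.e. the part of "A:rest" before the first ':'.
    static int compareNaturalKey(String k1, String k2) {
        return k1.split(":")[0].compareTo(k2.split(":")[0]);
    }

    // Sort composite keys fully, then group runs of keys whose natural
    // keys compare equal - roughly what the grouping comparator achieves.
    static Map<String, List<String>> group(List<String> compositeKeys) {
        List<String> sorted = new ArrayList<>(compositeKeys);
        sorted.sort(String::compareTo); // full composite-key sort order
        Map<String, List<String>> groups = new LinkedHashMap<>();
        for (String k : sorted) {
            groups.computeIfAbsent(k.split(":")[0], x -> new ArrayList<>()).add(k);
        }
        return groups;
    }

    public static void main(String[] args) {
        List<String> keys = List.of("b:2", "a:3", "a:1", "b:1");
        System.out.println(group(keys)); // {a=[a:1, a:3], b=[b:1, b:2]}
    }
}
```

Note the grouping only works because the sort comparator and partitioner (KeyComparator and MyPartitioner in the thread) ensure keys with the same natural key arrive at the same reducer, adjacent in sort order.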