From: Mohit Anchlia
To: user@hadoop.apache.org
Subject: Re: Trouble with Word Count example
Date: Thu, 29 Nov 2012 13:03:35 -0800

Try to give the full path or ./

Sent from my iPhone

On Nov 29, 2012, at 1:00 PM, "Kartashov, Andy" wrote:

> Maybe you stepped out of your working directory. "$ ls -l" Do you see your .jar?
>
> From: Sandeep Jangra [mailto:sandeepjangra@gmail.com]
> Sent: Thursday, November 29, 2012 3:46 PM
> To: user@hadoop.apache.org
> Subject: Re: Trouble with Word Count example
>
> Hi Harsh,
>
> I tried putting the generic option first, but it throws a file-not-found exception.
> The jar is in the current directory. Then I tried giving the absolute path to the jar, but that also brought no luck.
>
> sudo -u hdfs hadoop jar word_cnt.jar WordCount2 -libjars=word_cnt.jar /tmp/root/input /tmp/root/output17
> Exception in thread "main" java.io.FileNotFoundException: File word_cnt.jar does not exist.
>         at org.apache.hadoop.util.GenericOptionsParser.validateFiles(GenericOptionsParser.java:384)
>         at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:280)
>         at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:418)
>         at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:168)
>         at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:151)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
>
> Also, I have been deleting my jars and the class directory before each new try, so even I am suspicious about why I see this:
> "12/11/29 10:20:59 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String)."
>
> Could it be that my Hadoop is running on the old jar files (the ones with the package name "mapred", not "mapreduce"), while my program is also using the new jars?
>
> I can try going back to the old word count example on the Apache site and using the old jars.
>
> Any other pointers would be highly appreciated. Thanks
>
> On Thu, Nov 29, 2012 at 2:42 PM, Harsh J wrote:
> I think you may not have recompiled your application properly.
>
> Your runtime shows this:
>
> 12/11/29 10:20:59 WARN mapred.JobClient: No job jar file set. User
> classes may not be found. See JobConf(Class) or
> JobConf#setJar(String).
>
> This should not appear, because your code has this (which I suspect you
> may have added later, accidentally?):
>
> job.setJarByClass(WordCount2.class);
>
> So if you delete the older jar and recompile it, the
> problem should go away.
>
> Also, generic options such as -libjars need to go first
> in the argument order. I mean, it should always be [Classname] [Generic
> Options] [Application Options]. Otherwise, they may not get utilized
> properly.
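The ordering Harsh describes can be summarized in a short sketch, reusing the jar name and HDFS paths from this thread. Note that it passes the -libjars value as a separate token rather than with "=", since GenericOptionsParser reads the value as the following argument; treat the exact flag form as something to verify against your Hadoop version.

```shell
# Order matters: hadoop jar <app.jar> <MainClass> [generic options] [app args]
# Generic options such as -libjars go right after the class name,
# before the application's own arguments:
cmd="sudo -u hdfs hadoop jar word_cnt.jar WordCount2 -libjars word_cnt.jar /tmp/root/input /tmp/root/output17"
echo "$cmd"
```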
>
> On Fri, Nov 30, 2012 at 12:51 AM, Sandeep Jangra wrote:
> > Yups, I can see my class files there.
> >
> > On Thu, Nov 29, 2012 at 2:13 PM, Kartashov, Andy wrote:
> >> Can you try running "jar -tvf word_cnt.jar" and see if your static nested
> >> classes WordCount2$Map.class and WordCount2$Reduce.class have actually been
> >> added to the jar.
> >>
> >> Rgds,
> >> AK47
> >>
> >> From: Sandeep Jangra [mailto:sandeepjangra@gmail.com]
> >> Sent: Thursday, November 29, 2012 1:36 PM
> >> To: user@hadoop.apache.org
> >> Subject: Re: Trouble with Word Count example
> >>
> >> Also, I did set the HADOOP_CLASSPATH variable to point to the word_cnt.jar
> >> only.
> >>
> >> On Thu, Nov 29, 2012 at 10:54 AM, Sandeep Jangra wrote:
> >> Thanks for the quick response, Mahesh.
> >>
> >> I am using the following command:
> >>
> >> sudo -u hdfs hadoop jar word_cnt.jar WordCount2 /tmp/root/input
> >> /tmp/root/output15 -libjars=word_cnt.jar
> >>
> >> (The input directory exists on HDFS.)
> >>
> >> This is how I compiled and packaged it:
> >>
> >> javac -classpath /usr/lib/hadoop-0.20-mapreduce/hadoop-core.jar:/usr/lib/hadoop/* -d word_cnt WordCount2.java
> >> jar -cvf word_cnt.jar -C word_cnt/ .
> >>
> >> On Thu, Nov 29, 2012 at 10:46 AM, Mahesh Balija wrote:
> >> Hi Sandeep,
> >>
> >> For me everything seems to be alright.
> >> Can you tell us how you are running this job?
> >>
> >> Best,
> >> Mahesh.B.
> >> Calsoft Labs.
> >>
> >> On Thu, Nov 29, 2012 at 9:01 PM, Sandeep Jangra wrote:
> >> Hello everyone,
> >>
> >> Like most others, I am also running into some problems while running my
> >> word count example.
> >>
> >> I tried the various suggestions available on the internet, but I guess it's
> >> time to go on email :)
> >>
> >> Here is the error that I am getting:
> >>
> >> 12/11/29 10:20:59 WARN mapred.JobClient: Use GenericOptionsParser for
> >> parsing the arguments. Applications should implement Tool for the same.
> >> 12/11/29 10:20:59 WARN mapred.JobClient: No job jar file set. User
> >> classes may not be found. See JobConf(Class) or JobConf#setJar(String).
> >> 12/11/29 10:20:59 INFO input.FileInputFormat: Total input paths to process : 1
> >> 12/11/29 10:20:59 INFO util.NativeCodeLoader: Loaded the native-hadoop library
> >> 12/11/29 10:20:59 WARN snappy.LoadSnappy: Snappy native library is available
> >> 12/11/29 10:20:59 INFO snappy.LoadSnappy: Snappy native library loaded
> >> 12/11/29 10:21:00 INFO mapred.JobClient: Running job: job_201210310210_0040
> >> 12/11/29 10:21:01 INFO mapred.JobClient:  map 0% reduce 0%
> >> 12/11/29 10:21:07 INFO mapred.JobClient: Task Id :
> >> attempt_201210310210_0040_m_000000_0, Status : FAILED
> >> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class
> >> WordCount2$Map not found
> >>         at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1439)
> >>         at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:191)
> >>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:605)
> >>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
> >>         at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
> >>         at java.security.AccessController.doPrivileged(Native Method)
> >>         at javax.security.auth.Subject.doAs(Subject.java:416)
> >>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
> >>         at org.apache.hadoop.mapred.Child.main(Child.java:264)
> >> Caused by: java.lang.ClassNotFoundException: Class WordCount2$Map not found
> >>         at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1350)
> >>         at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1437)
> >>         ... 8 more
> >>
> >> And here is the source code:
> >>
> >> import org.apache.hadoop.conf.Configuration;
> >> import org.apache.hadoop.conf.Configured;
> >> import org.apache.hadoop.fs.Path;
> >> import org.apache.hadoop.io.IntWritable;
> >> import org.apache.hadoop.io.LongWritable;
> >> import org.apache.hadoop.io.Text;
> >> import org.apache.hadoop.mapreduce.Job;
> >> import org.apache.hadoop.mapreduce.Mapper;
> >> import org.apache.hadoop.mapreduce.Reducer;
> >> import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
> >> import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
> >> import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
> >> import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
> >> import org.apache.hadoop.util.Tool;
> >> import org.apache.hadoop.util.ToolRunner;
> >>
> >> import java.io.IOException;
> >> import java.util.StringTokenizer;
> >>
> >> public class WordCount2 extends Configured implements Tool {
> >>
> >>     public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
> >>         private final static IntWritable one = new IntWritable(1);
> >>         private Text word = new Text();
> >>
> >>         @Override
> >>         protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
> >>             String line = value.toString();
> >>             StringTokenizer tokenizer = new StringTokenizer(line);
> >>             while (tokenizer.hasMoreTokens()) {
> >>                 word.set(tokenizer.nextToken());
> >>                 context.write(word, one);
> >>             }
> >>         }
> >>     }
> >>
> >>     public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
> >>
> >>         @Override
> >>         protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
> >>             int sum = 0;
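The archived message breaks off inside the reduce method. Stripped of the Hadoop types, the map/reduce logic quoted above is just tokenize-and-count; here is a self-contained sketch of that core logic in plain Java, with no Hadoop dependencies (the class and method names are illustrative, not from the thread):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class WordCountCore {
    // Mirrors the Mapper: tokenize each line and emit (word, 1).
    // The HashMap plays the Reducer's role of summing the 1s per key,
    // like the "int sum" accumulator in the quoted reduce method.
    static Map<String, Integer> count(String... lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                counts.merge(tokenizer.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("the quick brown fox", "the fox"));
    }
}
```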