Subject: Re: Modify the number of map tasks
From: imen Megdiche <imen.megdiche@gmail.com>
To: user@hive.apache.org
Date: Wed, 12 Dec 2012 13:44:03 +0100

My goal is to analyze the response time of MapReduce depending on the size of the input files. I need to change the number of map and/or reduce tasks and recover the execution time. So it turns out that nothing works locally on my PC: neither the "hadoop job -status job_local_0001" command (which returns "no job found") nor localhost:50030.

I would be very grateful if you could help me better understand these problems.
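(For reference, a rough sketch of the usual ways to inspect a job on Hadoop 1.x, assuming the standard CLI and default ports. Note that when mapred.job.tracker is set to "local", the job runs inside the client JVM via the LocalJobRunner, which is why the id looks like job_local_0001; in that mode there is no JobTracker for these commands to query and nothing serves the web UI on port 50030.)

    # list the jobs known to the JobTracker (pseudo-distributed or full cluster only)
    hadoop job -list

    # show the status of one job; replace <job_id> with an id reported by the JobTracker
    hadoop job -status <job_id>

    # JobTracker web UI, default port 50030; on a single-node setup:
    #   http://localhost:50030/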
2012/12/12 Mohammad Tariq <dontariq@gmail.com>

> Are you working locally? What exactly is the issue?
>
> Regards,
> Mohammad Tariq
>
> On Wed, Dec 12, 2012 at 6:00 PM, imen Megdiche <imen.megdiche@gmail.com> wrote:
>
>> no
>>
>> 2012/12/12 Mohammad Tariq <dontariq@gmail.com>
>>
>>> Any luck with "localhost:50030"?
>>>
>>> Regards,
>>> Mohammad Tariq
>>>
>>> On Wed, Dec 12, 2012 at 5:53 PM, imen Megdiche <imen.megdiche@gmail.com> wrote:
>>>
>>>> I run the job through the command line.
>>>>
>>>> 2012/12/12 Mohammad Tariq <dontariq@gmail.com>
>>>>
>>>>> You have to replace "JobTrackerHost" in "JobTrackerHost:50030" with
>>>>> the actual name of the machine where the JobTracker is running. For
>>>>> example, if you are working on a local cluster, you have to use
>>>>> "localhost:50030".
>>>>>
>>>>> Are you running your job through the command line or some IDE?
>>>>>
>>>>> Regards,
>>>>> Mohammad Tariq
>>>>>
>>>>> On Wed, Dec 12, 2012 at 5:42 PM, imen Megdiche <imen.megdiche@gmail.com> wrote:
>>>>>
>>>>>> Excuse me, the data size is 98 MB.
>>>>>>
>>>>>> 2012/12/12 imen Megdiche <imen.megdiche@gmail.com>
>>>>>>
>>>>>>> The size of the data is 49 MB and the number of maps is 4.
>>>>>>> The web UI JobTrackerHost:50030 does not work. What should I do to
>>>>>>> make it appear? I work on Ubuntu.
>>>>>>>
>>>>>>> 2012/12/12 Mohammad Tariq <dontariq@gmail.com>
>>>>>>>
>>>>>>>> Hi Imen,
>>>>>>>>
>>>>>>>> You can visit the MR web UI at "JobTrackerHost:50030" and see
>>>>>>>> all the useful information like the number of mappers, the number of
>>>>>>>> reducers, the time taken for the execution, etc.
>>>>>>>>
>>>>>>>> One quick question for you: what is the size of your data and what
>>>>>>>> is the number of maps you are getting right now?
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Mohammad Tariq
>>>>>>>>
>>>>>>>> On Wed, Dec 12, 2012 at 5:11 PM, imen Megdiche <imen.megdiche@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Thank you Mohammad, but the number of map tasks is still the same in
>>>>>>>>> the execution. Do you know how to capture the time spent on execution?
>>>>>>>>>
>>>>>>>>> 2012/12/12 Mohammad Tariq <dontariq@gmail.com>
>>>>>>>>>
>>>>>>>>>> Hi Imen,
>>>>>>>>>>
>>>>>>>>>> You can add the "mapred.map.tasks" property in your
>>>>>>>>>> mapred-site.xml file.
>>>>>>>>>>
>>>>>>>>>> But it is just a hint for the InputFormat. The number of maps is
>>>>>>>>>> actually determined by the number of InputSplits created by the
>>>>>>>>>> InputFormat.
>>>>>>>>>>
>>>>>>>>>> HTH
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Mohammad Tariq
>>>>>>>>>>
>>>>>>>>>> On Wed, Dec 12, 2012 at 4:11 PM, imen Megdiche <imen.megdiche@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> I try to force the number of maps for the MapReduce job with the
>>>>>>>>>>> following code:
>>>>>>>>>>>
>>>>>>>>>>>   public static void main(String[] args) throws Exception {
>>>>>>>>>>>
>>>>>>>>>>>       JobConf conf = new JobConf(WordCount.class);
>>>>>>>>>>>       conf.set("mapred.job.tracker", "local");
>>>>>>>>>>>       conf.set("fs.default.name", "local");
>>>>>>>>>>>       conf.setJobName("wordcount");
>>>>>>>>>>>
>>>>>>>>>>>       conf.setOutputKeyClass(Text.class);
>>>>>>>>>>>       conf.setOutputValueClass(IntWritable.class);
>>>>>>>>>>>
>>>>>>>>>>>       conf.setNumMapTasks(6);
>>>>>>>>>>>       conf.setMapperClass(Map.class);
>>>>>>>>>>>       conf.setCombinerClass(Reduce.class);
>>>>>>>>>>>       conf.setReducerClass(Reduce.class);
>>>>>>>>>>>       ...
>>>>>>>>>>>   }
>>>>>>>>>>>
>>>>>>>>>>> But it doesn't work.
>>>>>>>>>>> What can I do to modify the number of map and reduce tasks?
>>>>>>>>>>>
>>>>>>>>>>> Thank you
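(In case it helps with timing the runs, below is a rough, untested sketch of a complete Hadoop 1.x old-API WordCount driver along the lines Mohammad describes: setNumMapTasks() is only a hint, because the InputFormat's splits decide the real number of map tasks, while setNumReduceTasks() is used as given, and the elapsed time can be measured around the blocking JobClient.runJob() call. The Map and Reduce classes are the standard WordCount ones, filled in here only to make the sketch self-contained.)

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    public class WordCount {

        // Standard WordCount mapper: emits (word, 1) for every token in the line.
        public static class Map extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            public void map(LongWritable key, Text value,
                            OutputCollector<Text, IntWritable> output, Reporter reporter)
                    throws IOException {
                StringTokenizer tokenizer = new StringTokenizer(value.toString());
                while (tokenizer.hasMoreTokens()) {
                    word.set(tokenizer.nextToken());
                    output.collect(word, ONE);
                }
            }
        }

        // Standard WordCount reducer (also used as combiner): sums the counts per word.
        public static class Reduce extends MapReduceBase
                implements Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterator<IntWritable> values,
                               OutputCollector<Text, IntWritable> output, Reporter reporter)
                    throws IOException {
                int sum = 0;
                while (values.hasNext()) {
                    sum += values.next().get();
                }
                output.collect(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws IOException {
            JobConf conf = new JobConf(WordCount.class);
            conf.setJobName("wordcount");

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);

            // Only a hint: the actual number of map tasks is decided by the number
            // of InputSplits the InputFormat creates (roughly one per block).
            conf.setNumMapTasks(6);
            // The number of reduce tasks, by contrast, is used exactly as given.
            conf.setNumReduceTasks(2);

            conf.setMapperClass(Map.class);
            conf.setCombinerClass(Reduce.class);
            conf.setReducerClass(Reduce.class);

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            // runJob() blocks until the job finishes, so the wall-clock execution
            // time of the whole job can be measured around it.
            long start = System.currentTimeMillis();
            JobClient.runJob(conf);
            System.out.println("Job finished in " + (System.currentTimeMillis() - start) + " ms");
        }
    }

(Assuming the class is packed into a jar, something like "hadoop jar wordcount.jar WordCount <input dir> <output dir>" would run it and print the elapsed time; varying the input size and the map-task hint and comparing the printed times should give the response-time measurements described above.)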