From: Leo Leung <lleung@ddn.com>
To: user@hadoop.apache.org
Subject: RE: MRBench Maps strange behaviour
Date: Wed, 29 Aug 2012 17:11:56 +0000

mrbench's "actual launched map tasks" count depends on the number of inputLines.

So in your first case, you did specify more input lines than maps, hence all maps are launched.

The default inputLines is 1, which is (cough cough) quite oblivious to the number of maps you specify. (That was your second case.)
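As a minimal sketch of that point (reusing the jar path and flags already quoted in this thread; the specific counts are only illustrative), keeping -inputLines at least as large as -maps gives every requested map task some input to read, so the launched-map count should stay close to the requested value, as in the 200-map run quoted below:

  # Illustrative run: 10 input lines for 10 requested maps, so no requested map is left without input.
  hadoop jar /usr/lib/hadoop-0.20/hadoop-test.jar mrbench \
      -numRuns 3 \
      -maps 10 \
      -reduces 10 \
      -inputLines 10 \
      -inputType random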
From: praveenesh kumar [mailto:praveenesh@gmail.com]
Sent: Wednesday, August 29, 2012 1:45 AM
To: user@hadoop.apache.org
Subject: Re: MRBench Maps strange behaviour

Then the question arises: how is MRBench using the parameters?
According to the mail he sent, he is running MRBench with the following parameters:

hadoop jar /usr/lib/hadoop-0.20/hadoop-test.jar mrbench -maps 10 -reduces 10

I guess he is expecting MRBench to launch 10 mappers and 10 reducers, but he is getting different results, which are visible in the counters, and we can use all our map and input-split logic to justify the counter outputs.

The question here is: how can we use MRBench, and what does it provide? How can we control it to run with different parameters for benchmarking? Can someone explain how to use MRBench and what exactly it does?

Regards,
Praveenesh

On Wed, Aug 29, 2012 at 3:31 AM, Hemanth Yamijala <yhemanth@gmail.com> wrote:
I assume you are asking what the exact number of maps launched is.
If so, the output of the MRBench run prints the counter
"Launched map tasks". That is the exact number of maps launched.

Thanks
Hemanth

On Wed, Aug 29, 2012 at 1:14 PM, Gaurav Dasgupta <gdsayshi@gmail.com> wrote:
> Hi Hemanth,
>
> Thanks for the reply.
> Can you tell me how I can calculate or confirm from the counters what should
> be the exact number of Maps?
> Thanks,
> Gaurav Dasgupta
> On Wed, Aug 29, 2012 at 11:26 AM, Hemanth Yamijala <yhemanth@gmail.com>
> wrote:
>>
>> Hi,
>>
>> The number of maps specified to any map reduce program (including
>> those that are part of MRBench) is generally only a hint, and the actual
>> number of maps will in typical cases be influenced by the amount of data
>> being processed. You can take a look at this wiki link to understand
>> more: http://wiki.apache.org/hadoop/HowManyMapsAndReduces
>>
>> In the examples below, since the data you've generated is different,
>> the number of mappers is different. To be able to judge your
>> benchmark results, you'd need to benchmark against the same data (or
>> at least the same kind of data - i.e. the same size and type).
>>
>> The number of maps printed at the end is taken straight from the input
>> specified and doesn't reflect what the job actually ran with. The
>> information from the counters is the right one.
>>
>> Thanks
>> Hemanth
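If you script MRBench runs, one way to capture that counter (a sketch only; it just greps the mapred.JobClient lines that show up in the console output quoted below) is:

  # Illustrative wrapper: run the benchmark and pull the launched-task counters
  # out of the JobClient log that the job prints to the console.
  hadoop jar /usr/lib/hadoop-0.20/hadoop-test.jar mrbench -maps 10 -reduces 10 -inputLines 10 2>&1 \
      | grep -E "Launched (map|reduce) tasks"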
>> On Tue, Aug 28, 2012 at 4:02 PM, Gaurav Dasgupta <gdsayshi@gmail.com>
>> wrote:
>> > Hi All,
>> >
>> > I executed the "MRBench" program from "hadoop-test.jar" in my 12-node
>> > CDH3 cluster. After executing, I had some strange observations regarding
>> > the number of Maps it ran.
>> >
>> > First I ran the command:
>> > hadoop jar /usr/lib/hadoop-0.20/hadoop-test.jar mrbench -numRuns 3 -maps 200
>> > -reduces 200 -inputLines 1024 -inputType random
>> > And I could see that the actual number of Maps it ran was 201 (for all 3
>> > runs) instead of 200 (though the end report displays the number launched
>> > as 200). Here is the console report:
>> >
>> > 12/08/28 04:34:35 INFO mapred.JobClient: Job complete: job_201208230144_0035
>> > 12/08/28 04:34:35 INFO mapred.JobClient: Counters: 28
>> > 12/08/28 04:34:35 INFO mapred.JobClient:   Job Counters
>> > 12/08/28 04:34:35 INFO mapred.JobClient:     Launched reduce tasks=200
>> > 12/08/28 04:34:35 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=617209
>> > 12/08/28 04:34:35 INFO mapred.JobClient:     Total time spent by all reduces
>> > waiting after reserving slots (ms)=0
>> > 12/08/28 04:34:35 INFO mapred.JobClient:     Total time spent by all maps
>> > waiting after reserving slots (ms)=0
>> > 12/08/28 04:34:35 INFO mapred.JobClient:     Rack-local map tasks=137
>> > 12/08/28 04:34:35 INFO mapred.JobClient:     Launched map tasks=201
>> > 12/08/28 04:34:35 INFO mapred.JobClient:     Data-local map tasks=64
>> > 12/08/28 04:34:35 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=1756882
>> >
>> > Again, I ran MRBench for just 10 Maps and 10 Reduces:
>> >
>> > hadoop jar /usr/lib/hadoop-0.20/hadoop-test.jar mrbench -maps 10 -reduces 10
>> >
>> > This time the actual number of Maps was only 2, and again the end report
>> > displays the Maps launched as 10. The console output:
>> >
>> > 12/08/28 05:05:35 INFO mapred.JobClient: Job complete: job_201208230144_0040
>> > 12/08/28 05:05:35 INFO mapred.JobClient: Counters: 27
>> > 12/08/28 05:05:35 INFO mapred.JobClient:   Job Counters
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Launched reduce tasks=20
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=6648
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Total time spent by all reduces
>> > waiting after reserving slots (ms)=0
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Total time spent by all maps
>> > waiting after reserving slots (ms)=0
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Launched map tasks=2
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Data-local map tasks=2
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=163257
>> > 12/08/28 05:05:35 INFO mapred.JobClient:   FileSystemCounters
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     FILE_BYTES_READ=407
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     HDFS_BYTES_READ=258
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=1072596
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=3
>> > 12/08/28 05:05:35 INFO mapred.JobClient:   Map-Reduce Framework
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Map input records=1
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Reduce shuffle bytes=647
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Spilled Records=2
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Map output bytes=5
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     CPU time spent (ms)=17070
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Total committed heap usage (bytes)=6218842112
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Map input bytes=2
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Combine input records=0
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     SPLIT_RAW_BYTES=254
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Reduce input records=1
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Reduce input groups=1
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Combine output records=0
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Physical memory (bytes) snapshot=3348828160
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Reduce output records=1
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=22955810816
>> > 12/08/28 05:05:35 INFO mapred.JobClient:     Map output records=1
>> >
>> > DataLines    Maps    Reduces    AvgTime (milliseconds)
>> > 1            20      20         17451
>> >
>> > Can someone please help me understand this behaviour of Hadoop in this
>> > case. My main purpose in running MRBench is to calculate the average time
>> > for a certain number of Maps, Reduces, InputLines, etc. If the number of
>> > Maps is not what I submitted, then how can I judge my benchmark results?
>> >
>> > Thanks,
>> >
>> > Gaurav Dasgupta
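One way to act on the advice above (a sketch only; the flag values are illustrative and the grep simply matches the counter and summary lines quoted in this thread) is to keep the data shape fixed across runs and record the launched-map counter next to each averaged time:

  # Illustrative comparable benchmark: identical inputLines/inputType/maps/reduces each run,
  # so the AvgTime figures can be compared run to run.
  hadoop jar /usr/lib/hadoop-0.20/hadoop-test.jar mrbench \
      -numRuns 3 -maps 200 -reduces 200 -inputLines 1024 -inputType random 2>&1 \
      | grep -E -A1 "Launched map tasks|AvgTime"

As noted above, the Maps figure in the end-of-run summary comes straight from the input specified rather than from what actually ran, so the "Launched map tasks" counter is the number worth recording alongside each AvgTime.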
