Date: Wed, 31 Oct 2012 13:37:14 +0100
From: Alexandre Fouche
To: user@hadoop.apache.org
Subject: Re: Insight on why distcp becomes slower when adding nodemanager

These instances have no swap. I tried 5 or 6 times in a row, and I also modified yarn.nodemanager.resource.memory-mb, but it did not help. Later on, I'll replace the OpenJDK with Oracle Java SE 1.6.31 to see if it improves overall performance.

Now I am running everything on medium instances for prototyping, and while this is better, I still find it unreasonably slow. Maybe poor Hadoop performance on instances with less memory than an xlarge is just to be expected on EC2?
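For reference, what I changed on the small node was yarn-site.xml (followed by a nodemanager restart); roughly the following, with the value only illustrative for a 1.7 GB instance:

    <!-- yarn-site.xml on the 1.7 GB nodemanager; illustrative value -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>1024</value>
    </property>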
--
Alexandre Fouche
Lead operations engineer, cloud architect
http://www.cleverscale.com | @cleverscale


On Monday, 29 October 2012 at 20:04, Michael Segel wrote:

> How many times did you test it?
>
> We need to rule out aberrations.
>
> On Oct 29, 2012, at 11:30 AM, Harsh J <harsh@cloudera.com> wrote:
>
> > On your second low-memory NM instance, did you ensure to lower the
> > yarn.nodemanager.resource.memory-mb property specifically, to avoid
> > swapping due to excessive resource grants? The default offered is 8 GB
> > (far more than the 1.7 GB you have).
> >
> > On Mon, Oct 29, 2012 at 8:42 PM, Alexandre Fouche
> > <alexandre.fouche@cleverscale.com> wrote:
> > > Hi,
> > >
> > > Can someone give some insight on why a "distcp" of 600 files of a few
> > > hundred bytes from s3n:// to local HDFS takes 46 s when using one
> > > yarn-nodemanager EC2 instance with 16 GB memory (which, by the way, I
> > > already find absurdly long), but takes 3 min 30 s when a second
> > > yarn-nodemanager (a small instance with 1.7 GB memory) is added?
> > > I would have expected it to be a bit faster, not 5x longer!
> > >
> > > I have the same issue when I stop the small-instance nodemanager and
> > > restart it so that it joins the processing after the job was already
> > > submitted to the big nodemanager instance.
> > >
> > > I am using the latest Cloudera YARN+HDFS on Amazon (rebranded CentOS 6).
> > >
> > > #Staging 14:58:04 root@datanode2:hadoop-yarn: rpm -qa | grep hadoop
> > > hadoop-hdfs-datanode-2.0.0+545-1.cdh4.1.1.p0.5.el6.x86_64
> > > hadoop-mapreduce-2.0.0+545-1.cdh4.1.1.p0.5.el6.x86_64
> > > hadoop-0.20-mapreduce-0.20.2+1261-1.cdh4.1.1.p0.4.el6.x86_64
> > > hadoop-yarn-nodemanager-2.0.0+545-1.cdh4.1.1.p0.5.el6.x86_64
> > > hadoop-mapreduce-historyserver-2.0.0+545-1.cdh4.1.1.p0.5.el6.x86_64
> > > hadoop-hdfs-2.0.0+545-1.cdh4.1.1.p0.5.el6.x86_64
> > > hadoop-client-2.0.0+545-1.cdh4.1.1.p0.5.el6.x86_64
> > > hadoop-2.0.0+545-1.cdh4.1.1.p0.5.el6.x86_64
> > > hadoop-yarn-2.0.0+545-1.cdh4.1.1.p0.5.el6.x86_64
> > >
> > > #Staging 14:39:51 root@resourcemanager:hadoop-yarn:
> > > HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce time hadoop distcp -overwrite
> > > s3n://xxx:xxx@s3n.hadoop.cwsdev/* hdfs:///tmp/something/a
> > >
> > > 12/10/29 14:40:12 INFO tools.DistCp: Input Options:
> > > DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false,
> > > ignoreFailures=false, maxMaps=20, sslConfigurationFile='null',
> > > copyStrategy='uniformsize', sourceFileListing=null,
> > > sourcePaths=[s3n://xxx:xxx@s3n.hadoop.cwsdev/*],
> > > targetPath=hdfs:/tmp/something/a}
> > > 12/10/29 14:40:18 WARN conf.Configuration: io.sort.mb is deprecated.
> > > Instead, use mapreduce.task.io.sort.mb
> > > 12/10/29 14:40:18 WARN conf.Configuration: io.sort.factor is deprecated.
> > > Instead, use mapreduce.task.io.sort.factor
> > > 12/10/29 14:40:19 INFO mapreduce.JobSubmitter: number of splits:15
> > > 12/10/29 14:40:19 WARN conf.Configuration: mapred.jar is deprecated.
> > > Instead, use mapreduce.job.jar
> > > 12/10/29 14:40:19 WARN conf.Configuration:
> > > mapred.map.tasks.speculative.execution is deprecated. Instead, use
> > > mapreduce.map.speculative
> > > 12/10/29 14:40:19 WARN conf.Configuration: mapred.reduce.tasks is
> > > deprecated. Instead, use mapreduce.job.reduces
> > > 12/10/29 14:40:19 WARN conf.Configuration: mapred.mapoutput.value.class
> > > is deprecated. Instead, use mapreduce.map.output.value.class
> > > 12/10/29 14:40:19 WARN conf.Configuration: mapreduce.map.class is
> > > deprecated. Instead, use mapreduce.job.map.class
> > > 12/10/29 14:40:19 WARN conf.Configuration: mapred.job.name is
> > > deprecated. Instead, use mapreduce.job.name
> > > 12/10/29 14:40:19 WARN conf.Configuration: mapreduce.inputformat.class
> > > is deprecated. Instead, use mapreduce.job.inputformat.class
> > > 12/10/29 14:40:19 WARN conf.Configuration: mapred.output.dir is
> > > deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
> > > 12/10/29 14:40:19 WARN conf.Configuration: mapreduce.outputformat.class
> > > is deprecated. Instead, use mapreduce.job.outputformat.class
> > > 12/10/29 14:40:19 WARN conf.Configuration: mapred.map.tasks is
> > > deprecated. Instead, use mapreduce.job.maps
> > > 12/10/29 14:40:19 WARN conf.Configuration: mapred.mapoutput.key.class is
> > > deprecated. Instead, use mapreduce.map.output.key.class
> > > 12/10/29 14:40:19 WARN conf.Configuration: mapred.working.dir is
> > > deprecated. Instead, use mapreduce.job.working.dir
> > > 12/10/29 14:40:20 INFO mapred.ResourceMgrDelegate: Submitted application
> > > application_1351504801306_0015 to ResourceManager at
> > > resourcemanager.cwsdev.cleverscale.com/10.60.106.130:8032
> > > 12/10/29 14:40:20 INFO mapreduce.Job: The url to track the job:
> > > http://ip-10-60-106-130.ec2.internal:8088/proxy/application_1351504801306_0015/
> > > 12/10/29 14:40:20 INFO tools.DistCp: DistCp job-id: job_1351504801306_0015
> > > 12/10/29 14:40:20 INFO mapreduce.Job: Running job: job_1351504801306_0015
> > > 12/10/29 14:40:27 INFO mapreduce.Job: Job job_1351504801306_0015 running in uber mode : false
> > > 12/10/29 14:40:27 INFO mapreduce.Job:  map 0% reduce 0%
> > > 12/10/29 14:40:42 INFO mapreduce.Job:  map 6% reduce 0%
> > > 12/10/29 14:40:43 INFO mapreduce.Job:  map 33% reduce 0%
> > > 12/10/29 14:40:44 INFO mapreduce.Job:  map 40% reduce 0%
> > > 12/10/29 14:40:48 INFO mapreduce.Job:  map 46% reduce 0%
> > > 12/10/29 14:43:04 INFO mapreduce.Job:  map 56% reduce 0%
> > > 12/10/29 14:43:05 INFO mapreduce.Job:  map 58% reduce 0%
> > > 12/10/29 14:43:08 INFO mapreduce.Job:  map 62% reduce 0%
> > > 12/10/29 14:43:09 INFO mapreduce.Job:  map 68% reduce 0%
> > > 12/10/29 14:43:15 INFO mapreduce.Job:  map 75% reduce 0%
> > > 12/10/29 14:43:16 INFO mapreduce.Job:  map 82% reduce 0%
> > > 12/10/29 14:43:25 INFO mapreduce.Job:  map 85% reduce 0%
> > > 12/10/29 14:43:26 INFO mapreduce.Job:  map 87% reduce 0%
> > > 12/10/29 14:43:29 INFO mapreduce.Job:  map 90% reduce 0%
> > > 12/10/29 14:43:35 INFO mapreduce.Job:  map 93% reduce 0%
> > > 12/10/29 14:43:37 INFO mapreduce.Job:  map 96% reduce 0%
> > > 12/10/29 14:43:40 INFO mapreduce.Job:  map 100% reduce 0%
> > > 12/10/29 14:43:40 INFO mapreduce.Job: Job job_1351504801306_0015 completed successfully
> > > 12/10/29 14:43:40 INFO mapreduce.Job: Counters: 35
> > >   File System Counters
> > >     FILE: Number of bytes read=1800
> > >     FILE: Number of bytes written=1050895
> > >     FILE: Number of read operations=0
> > >     FILE: Number of large read operations=0
> > >     FILE: Number of write operations=0
> > >     HDFS: Number of bytes read=22157
> > >     HDFS: Number of bytes written=101379
> > >     HDFS: Number of read operations=519
> > >     HDFS: Number of large read operations=0
> > >     HDFS: Number of write operations=201
> > >     S3N: Number of bytes read=101379
> > >     S3N: Number of bytes written=0
> > >     S3N: Number of read operations=0
> > >     S3N: Number of large read operations=0
> > >     S3N: Number of write operations=0
> > >   Job Counters
> > >     Launched map tasks=15
> > >     Other local map tasks=15
> > >     Total time spent by all maps in occupied slots (ms)=12531208
> > >     Total time spent by all reduces in occupied slots (ms)=0
> > >   Map-Reduce Framework
> > >     Map input records=57
> > >     Map output records=0
> > >     Input split bytes=2010
> > >     Spilled Records=0
> > >     Failed Shuffles=0
> > >     Merged Map outputs=0
> > >     GC time elapsed (ms)=42324
> > >     CPU time spent (ms)=54890
> > >     Physical memory (bytes) snapshot=2923872256
> > >     Virtual memory (bytes) snapshot=12526301184
> > >     Total committed heap usage (bytes)=1618280448
> > >   File Input Format Counters
> > >     Bytes Read=20147
> > >   File Output Format Counters
> > >     Bytes Written=0
> > >   org.apache.hadoop.tools.mapred.CopyMapper$Counter
> > >     BYTESCOPIED=101379
> > >     BYTESEXPECTED=101379
> > >     COPY=57
> > >
> > > 6.90user 0.59system 3:29.17elapsed 3%CPU (0avgtext+0avgdata 819392maxresident)k
> > > 0inputs+344outputs (0major+62847minor)pagefaults 0swaps
> > >
> > > --
> > > Alexandre Fouche
> >
> > --
> > Harsh J
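PS: since these are lots of tiny files, I may also try capping the number of simultaneous maps, so the job does not pay the container startup cost 15 times over; something along these lines (not tested on my side yet; -m is distcp's option for the maximum number of simultaneous copies):

    HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce hadoop distcp -m 4 -overwrite \
        s3n://xxx:xxx@s3n.hadoop.cwsdev/* hdfs:///tmp/something/a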