Subject: Re: GC overhead limit exceeded
From: unmesha sreeveni <unmeshabiju@gmail.com>
Date: Tue, 11 Mar 2014 09:41:45 +0530
To: User Hadoop <user@hadoop.apache.org>

Try increasing the memory for the DataNode and see; this requires a Hadoop restart:

export HADOOP_DATANODE_OPTS="-Xmx10g"

This sets the heap to 10 GB. You can also add this at the start of the
hadoop-env.sh file.

On Tue, Mar 11, 2014 at 9:02 AM, haihong lu wrote:
> I have tried both of the methods you said, but the problem still exists.
> Thanks all the same. By the way, my Hadoop version is 2.2.0, so the
> parameter "mapreduce.map.memory.mb = 3072" added to mapred-site.xml may
> have no effect. I have looked for this parameter in the Hadoop
> documentation, but did not find it.
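The hadoop-env.sh suggestion above can be sketched as below. This is a minimal sketch, assuming the stock hadoop-env.sh layout; the 10 GB heap is the value from this thread and should really be sized to the node's physical RAM:

```shell
# Sketch of the suggested change: raise the DataNode JVM heap.
# Append to $HADOOP_HOME/etc/hadoop/hadoop-env.sh, then restart HDFS.
# Prepending keeps any options already set elsewhere in the file.
export HADOOP_DATANODE_OPTS="-Xmx10g ${HADOOP_DATANODE_OPTS:-}"
echo "$HADOOP_DATANODE_OPTS"
```

Note that HADOOP_DATANODE_OPTS only affects the DataNode daemon, not the map/reduce task JVMs that are failing in the logs below; those are sized by the MapReduce job properties.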
>
> On Fri, Mar 7, 2014 at 4:57 PM, shashwat shriparv <
> dwivedishashwat@gmail.com> wrote:
>> Check this out:
>>
>> http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
>>
>> *Warm Regards_∞_*
>> *Shashwat Shriparv*
>>
>> On Fri, Mar 7, 2014 at 12:04 PM, haihong lu wrote:
>>
>>> Hi:
>>>
>>> I have a problem when running HiBench with hadoop-2.2.0; the error
>>> messages are listed below:
>>>
>>> 14/03/07 13:54:53 INFO mapreduce.Job: map 19% reduce 0%
>>> 14/03/07 13:54:54 INFO mapreduce.Job: map 21% reduce 0%
>>> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000020_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:00:27 INFO mapreduce.Job: map 20% reduce 0%
>>> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000008_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:00:41 INFO mapreduce.Job: map 19% reduce 0%
>>> 14/03/07 14:00:59 INFO mapreduce.Job: map 20% reduce 0%
>>> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000015_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:00 INFO mapreduce.Job: map 19% reduce 0%
>>> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000023_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000026_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:01:35 INFO mapreduce.Job: map 20% reduce 0%
>>> 14/03/07
14:01:35 INFO mapreduce.Job: Task Id : >>> attempt_1394160253524_0010_m_000019_0, Status : FAILED >>> Error: GC overhead limit exceeded >>> 14/03/07 14:01:36 INFO mapreduce.Job: map 19% reduce 0% >>> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id : >>> attempt_1394160253524_0010_m_000007_0, Status : FAILED >>> Error: GC overhead limit exceeded >>> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id : >>> attempt_1394160253524_0010_m_000000_0, Status : FAILED >>> Error: GC overhead limit exceeded >>> 14/03/07 14:02:01 INFO mapreduce.Job: map 18% reduce 0% >>> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id : >>> attempt_1394160253524_0010_m_000021_0, Status : FAILED >>> Error: Java heap space >>> 14/03/07 14:02:24 INFO mapreduce.Job: map 17% reduce 0% >>> 14/03/07 14:02:31 INFO mapreduce.Job: map 18% reduce 0% >>> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id : >>> attempt_1394160253524_0010_m_000029_0, Status : FAILED >>> Error: GC overhead limit exceeded >>> 14/03/07 14:02:34 INFO mapreduce.Job: map 17% reduce 0% >>> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id : >>> attempt_1394160253524_0010_m_000010_0, Status : FAILED >>> Error: GC overhead limit exceeded >>> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id : >>> attempt_1394160253524_0010_m_000018_0, Status : FAILED >>> Error: GC overhead limit exceeded >>> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id : >>> attempt_1394160253524_0010_m_000014_0, Status : FAILED >>> Error: GC overhead limit exceeded >>> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id : >>> attempt_1394160253524_0010_m_000028_0, Status : FAILED >>> Error: Java heap space >>> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id : >>> attempt_1394160253524_0010_m_000002_0, Status : FAILED >>> Error: GC overhead limit exceeded >>> 14/03/07 14:02:51 INFO mapreduce.Job: map 16% reduce 0% >>> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id : >>> attempt_1394160253524_0010_m_000005_0, Status : FAILED >>> Error: GC overhead limit exceeded >>> 14/03/07 14:02:52 INFO 
mapreduce.Job: map 15% reduce 0%
>>> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000006_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000027_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:02:58 INFO mapreduce.Job: map 14% reduce 0%
>>> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000009_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000017_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000022_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:06 INFO mapreduce.Job: map 12% reduce 0%
>>> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000001_0, Status : FAILED
>>> Error: GC overhead limit exceeded
>>> 14/03/07 14:03:11 INFO mapreduce.Job: map 13% reduce 0%
>>> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000024_0, Status : FAILED
>>>
>>> Then I added the parameter "mapred.child.java.opts" to the file
>>> "mapred-site.xml":
>>>
>>>   <property>
>>>     <name>mapred.child.java.opts</name>
>>>     <value>-Xmx1024m</value>
>>>   </property>
>>>
>>> Then another error occurred, as below:
>>>
>>> 14/03/07 11:21:51 INFO mapreduce.Job: map 0% reduce 0%
>>> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id : attempt_1394160253524_0003_m_000002_0, Status : FAILED
>>> Container [pid=5592,containerID=container_1394160253524_0003_01_000004]
>>> is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB
>>> physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
>>> Dump of the process-tree for container_1394160253524_0003_01_000004 :
>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>> |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520 /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000002_0 4
>>> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000002_0 4 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>>>
>>> Container killed on request. Exit code is 143
>>> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id : attempt_1394160253524_0003_m_000001_0, Status : FAILED
>>> Container [pid=5182,containerID=container_1394160253524_0003_01_000003]
>>> is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB
>>> physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
>>> Dump of the process-tree for container_1394160253524_0003_01_000003 :
>>> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>>> |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000001_0 3 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
>>> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028 /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000001_0 3
>>>
>>> Container killed on request. Exit code is 143
>>>
>>> At last, the task failed.
>>> Thanks for any help!

--
*Thanks & Regards*
Unmesha Sreeveni U.B
Junior Developer
http://www.unmeshasreeveni.blogspot.in/
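The "running beyond virtual memory limits" failure in the quoted logs shows a container sized at 1 GB (with the default 2.1 vmem-pmem ratio, hence the 2.1 GB limit) while the task JVM was launched with -Xmx2048m, so the heap alone can exceed the container. A sketch of mapred-site.xml settings that keep the two consistent on Hadoop 2.x follows; the 3072/2560 values are illustrative assumptions to tune, not values from this thread:

```xml
<!-- mapred-site.xml sketch: YARN container size and task JVM heap must agree.
     Values are illustrative; keep the heap roughly 75-80% of the container. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>3072</value>   <!-- container size requested for each map task -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx2560m</value>   <!-- map JVM heap, kept below the container size -->
</property>
```

The virtual-memory limit itself comes from yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml (default 2.1), which is what produces the "2.1 GB virtual memory" figure in the log message.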