From: shashwat shriparv <dwivedishashwat@gmail.com>
Date: Fri, 7 Mar 2014 14:27:25 +0530
Subject: Re: GC overhead limit exceeded
To: user@hadoop.apache.org, ung3210@gmail.com

Check this out:
http://ask.gopivotal.com/hc/en-us/articles/201850408-Namenode-fails-with-java-lang-OutOfMemoryError-GC-overhead-limit-exceeded
(A short configuration sketch also follows the quoted message below.)

Warm Regards ∞
Shashwat Shriparv

On Fri, Mar 7, 2014 at 12:04 PM, haihong lu <ung3210@gmail.com> wrote:

> Hi:
>
> I have a problem when running HiBench with hadoop-2.2.0; the error
> messages are listed below.
>
> 14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
> 14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
> 14/03/07 14:00:26 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000020_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
> 14/03/07 14:00:40 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000008_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
> 14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
> 14/03/07 14:00:59 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000015_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
> 14/03/07 14:01:03 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000023_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:01:11 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000026_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
> 14/03/07 14:01:35 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000019_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
> 14/03/07 14:01:43 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000007_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:00 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000000_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
> 14/03/07 14:02:23 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000021_0, Status : FAILED
> Error: Java heap space
> 14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
> 14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
> 14/03/07 14:02:33 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000029_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
> 14/03/07 14:02:38 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000010_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:41 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000018_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:43 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000014_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:47 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000028_0, Status : FAILED
> Error: Java heap space
> 14/03/07 14:02:50 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000002_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
> 14/03/07 14:02:51 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000005_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
> 14/03/07 14:02:55 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000006_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:57 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000027_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
> 14/03/07 14:03:04 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000009_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000017_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:03:05 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000022_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
> 14/03/07 14:03:10 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000001_0, Status : FAILED
> Error: GC overhead limit exceeded
> 14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
> 14/03/07 14:03:11 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000024_0, Status : FAILED
>
> I then added the parameter "mapred.child.java.opts" to "mapred-site.xml":
>
>   <property>
>     <name>mapred.child.java.opts</name>
>     <value>-Xmx1024m</value>
>   </property>
>
> After that, a different error occurred, as shown below.
>
> 14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
> 14/03/07 11:21:59 INFO mapreduce.Job: Task Id : attempt_1394160253524_0003_m_000002_0, Status : FAILED
> Container [pid=5592,containerID=container_1394160253524_0003_01_000004] is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1394160253524_0003_01_000004 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520 /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000002_0 4
> |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000002_0 4 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
>
> Container killed on request. Exit code is 143
> 14/03/07 11:22:02 INFO mapreduce.Job: Task Id : attempt_1394160253524_0003_m_000001_0, Status : FAILED
> Container [pid=5182,containerID=container_1394160253524_0003_01_000003] is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1394160253524_0003_01_000003 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000001_0 3 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
> |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028 /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000001_0 3
>
> Container killed on request. Exit code is 143
>
> In the end, the task failed.
> Thanks for any help!
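The two failures are related: "GC overhead limit exceeded" means the map-task JVM is running out of heap, and the later "running beyond virtual memory limits" kill means the JVM grew past what its 1 GB YARN container allows. On Hadoop 2.x the heap and the container size are normally tuned together through the MRv2 properties rather than the old mapred.child.java.opts (note that the process dumps above still show -Xmx2048m on the java command line, so something else in the job configuration appears to override that setting). A minimal mapred-site.xml sketch, assuming roughly 2 GB per task is available on the nodes; the values are examples only and need to be sized for your cluster and workload:

  <!-- Sketch only: add inside the existing <configuration> element. -->
  <property>
    <!-- YARN container size requested for each map task -->
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <!-- Map-task JVM heap, kept at roughly 80% of the container -->
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1638m</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx1638m</value>
  </property>

Keeping -Xmx well below the *.memory.mb value leaves headroom for the JVM's own overhead (thread stacks, code cache, native buffers), which is also counted against the container.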
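The "Killing container" message itself comes from the NodeManager's virtual-memory check: with a 1 GB container and the default yarn.nodemanager.vmem-pmem-ratio of 2.1, the limit is 2.1 GB, and the dumps show about 2.7 GB of virtual memory in use, hence exit code 143. Raising the container size as above is usually enough; if virtual-memory usage remains high (JDK 7 and some native libraries reserve a lot of address space), the ratio can also be relaxed in yarn-site.xml. A sketch, with 3.0 as an example value:

  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>3.0</value>
  </property>
  <!-- Or, as a last resort, disable the virtual-memory check entirely: -->
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>

These are NodeManager settings, so the NodeManagers need a restart for the change to take effect.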