From: Liyin Liang <liyin.liangly@aliyun-inc.com>
To: user@hadoop.apache.org
Subject: Re: GC overhead limit exceeded
Date: Fri, 7 Mar 2014 16:39:19 +0800

By default, if your mapred.child.java.opts=-Xmx1024m, the memory limit for your task container is 2 GB. If the memory your map task uses is more than 2 GB, your map container will be killed by the NodeManager.

You can add the parameter mapreduce.map.memory.mb=3072 (3 GB) to try to fix this problem.
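For reference, a minimal mapred-site.xml sketch of that change might look like the following (3072 is just the value suggested above, the -Xmx1024m heap is the one already used in your job, and the property names are the standard Hadoop 2.x ones; adjust the numbers to your workload):

  <property>
    <!-- memory requested for each map container; suggested value from above -->
    <name>mapreduce.map.memory.mb</name>
    <value>3072</value>
  </property>
  <property>
    <!-- JVM heap for the map task; keep it below mapreduce.map.memory.mb -->
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>

If the job is submitted through ToolRunner/GenericOptionsParser, the same setting can usually also be passed per job with -D mapreduce.map.memory.mb=3072 instead of editing mapred-site.xml.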
Liyin Liang

From: haihong lu [mailto:ung3210@gmail.com]
Sent: March 7, 2014 14:34
To: user@hadoop.apache.org
Subject: GC overhead limit exceeded

Hi:

    I have a problem when running HiBench with hadoop-2.2.0; the error messages are listed below:

14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
14/03/07 14:00:26 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000020_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
14/03/07 14:00:40 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000008_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
14/03/07 14:00:59 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000015_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 14:01:03 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000023_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:11 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000026_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
14/03/07 14:01:35 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000019_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 14:01:43 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000007_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:00 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000000_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
14/03/07 14:02:23 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000021_0, Status : FAILED
Error: Java heap space
14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
14/03/07 14:02:33 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000029_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
14/03/07 14:02:38 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000010_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:41 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000018_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:43 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000014_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:47 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000028_0, Status : FAILED
Error: Java heap space
14/03/07 14:02:50 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000002_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
14/03/07 14:02:51 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000005_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
14/03/07 14:02:55 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000006_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:57 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000027_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
14/03/07 14:03:04 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000009_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:05 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000017_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:05 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000022_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
14/03/07 14:03:10 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000001_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
14/03/07 14:03:11 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000024_0, Status : FAILED

Then I added the parameter "mapred.child.java.opts" to the file "mapred-site.xml":

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>

Then another error occurred, as shown below:

14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
14/03/07 11:21:59 INFO mapreduce.Job: Task Id : attempt_1394160253524_0003_m_000002_0, Status : FAILED
Container [pid=5592,containerID=container_1394160253524_0003_01_000004] is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1394160253524_0003_01_000004 :
       |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
       |- 5598 5592 5592 5592 (java) 563 14 2778632192 28520 /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000002_0 4
       |- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000002_0 4 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr

Container killed on request. Exit code is 143
14/03/07 11:22:02 INFO mapreduce.Job: Task Id : attempt_1394160253524_0003_m_000001_0, Status : FAILED
Container [pid=5182,containerID=container_1394160253524_0003_01_000003] is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1394160253524_0003_01_000003 :
       |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
       |- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000001_0 3 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
       |- 5187 5182 5182 5182 (java) 616 19 2783928320 30028 /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000001_0 3

Container killed on request. Exit code is 143

At last, the task failed.
Thanks for any help!