Date: Fri, 7 Mar 2014 14:34:03 +0800
Subject: GC overhead limit exceeded
From: haihong lu
To: user@hadoop.apache.org

Hi:

I have a problem when running HiBench with hadoop-2.2.0; the error messages are listed below:

14/03/07 13:54:53 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 13:54:54 INFO mapreduce.Job:  map 21% reduce 0%
14/03/07 14:00:26 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000020_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:00:27 INFO mapreduce.Job:  map 20% reduce 0%
14/03/07 14:00:40 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000008_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:00:41 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 14:00:59 INFO mapreduce.Job:  map 20% reduce 0%
14/03/07 14:00:59 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000015_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:00 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 14:01:03 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000023_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:11 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000026_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:35 INFO mapreduce.Job:  map 20% reduce 0%
14/03/07 14:01:35 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000019_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:01:36 INFO mapreduce.Job:  map 19% reduce 0%
14/03/07 14:01:43 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000007_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:00 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000000_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:01 INFO mapreduce.Job:  map 18% reduce 0%
14/03/07 14:02:23 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000021_0, Status : FAILED
Error: Java heap space
14/03/07 14:02:24 INFO mapreduce.Job:  map 17% reduce 0%
14/03/07 14:02:31 INFO mapreduce.Job:  map 18% reduce 0%
14/03/07 14:02:33 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000029_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:34 INFO mapreduce.Job:  map 17% reduce 0%
14/03/07 14:02:38 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000010_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:41 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000018_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:43 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000014_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:47 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000028_0, Status : FAILED
Error: Java heap space
14/03/07 14:02:50 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000002_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:51 INFO mapreduce.Job:  map 16% reduce 0%
14/03/07 14:02:51 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000005_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:52 INFO mapreduce.Job:  map 15% reduce 0%
14/03/07 14:02:55 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000006_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:57 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000027_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:02:58 INFO mapreduce.Job:  map 14% reduce 0%
14/03/07 14:03:04 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000009_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:05 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000017_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:05 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000022_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:06 INFO mapreduce.Job:  map 12% reduce 0%
14/03/07 14:03:10 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000001_0, Status : FAILED
Error: GC overhead limit exceeded
14/03/07 14:03:11 INFO mapreduce.Job:  map 13% reduce 0%
14/03/07 14:03:11 INFO mapreduce.Job: Task Id : attempt_1394160253524_0010_m_000024_0, Status : FAILED

I then added the parameter "mapred.child.java.opts" to the file "mapred-site.xml":

  <property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx1024m</value>
  </property>
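For reference, my understanding is that on Hadoop 2.x the per-task heap is normally set through mapreduce.map.java.opts and mapreduce.reduce.java.opts, and that the heap has to fit inside the container sizes given by mapreduce.map.memory.mb and mapreduce.reduce.memory.mb. The fragment below is only a sketch with assumed values, not the configuration I am actually running:

  <property>
        <name>mapreduce.map.memory.mb</name>
        <value>2048</value>        <!-- assumed container size, for illustration only -->
  </property>
  <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1536m</value>   <!-- illustrative heap, kept below the container size -->
  </property>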
With that change in place, another error occurs, as shown below:

14/03/07 11:21:51 INFO mapreduce.Job:  map 0% reduce 0%
14/03/07 11:21:59 INFO mapreduce.Job: Task Id : attempt_1394160253524_0003_m_000002_0, Status : FAILED
Container [pid=5592,containerID=container_1394160253524_0003_01_000004] is running beyond virtual memory limits. Current usage: 112.6 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1394160253524_0003_01_000004 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 5598 5592 5592 5592 (java) 563 14 2778632192 28520 /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000002_0 4
|- 5592 4562 5592 5592 (bash) 0 0 108650496 300 /bin/bash -c /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000002_0 4 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stdout 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000004/stderr
Container killed on request. Exit code is 143
14/03/07 11:22:02 INFO mapreduce.Job: Task Id : attempt_1394160253524_0003_m_000001_0, Status : FAILED
Container [pid=5182,containerID=container_1394160253524_0003_01_000003] is running beyond virtual memory limits. Current usage: 118.5 MB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1394160253524_0003_01_000003 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 5182 4313 5182 5182 (bash) 0 0 108650496 303 /bin/bash -c /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000001_0 3 1>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stdout 2>/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003/stderr
|- 5187 5182 5182 5182 (java) 616 19 2783928320 30028 /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1394160253524_0003/container_1394160253524_0003_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/var/log/hadoop/yarn/userlogs/application_1394160253524_0003/container_1394160253524_0003_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.239.44.34 46837 attempt_1394160253524_0003_m_000001_0 3
Container killed on request. Exit code is 143
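A side note on the numbers above, in case it matters: if I read the message correctly, the 2.1 GB limit appears to be the container's 1 GB physical allocation multiplied by YARN's virtual-memory ratio, which defaults to 2.1. A minimal yarn-site.xml sketch of the properties I believe are involved (values shown are assumptions for illustration, not my actual settings):

  <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>2.1</value>           <!-- default ratio; shown only for illustration -->
  </property>
  <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>true</value>          <!-- the check that kills the container; true by default -->
  </property>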
In the end, the task failed.
Thanks for any help!