From: tesmai4@gmail.com
Date: Fri, 20 Feb 2015 17:49:27 +0000
Subject: Fwd: YARN container launch failed exception and mapred-site.xml configuration
To: user@hadoop.apache.org

I have 7 nodes in my Hadoop cluster [8 GB RAM and 4 vCPUs on each node], 1 NameNode + 6 DataNodes. I followed the Hortonworks guide [http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html], made the calculations according to the hardware configuration of my nodes, and have included the updated mapred-site.xml and yarn-site.xml files in my question. Still, my application crashes with the same exception.

My MapReduce application has 34 input splits with a block size of 128 MB.
**mapred-site.xml** has the following properties:

    mapreduce.framework.name  = yarn
    mapred.child.java.opts    = -Xmx2048m
    mapreduce.map.memory.mb   = 4096
    mapreduce.map.java.opts   = -Xmx2048m

**yarn-site.xml** has the following properties:

    yarn.resourcemanager.hostname        = hadoop-master
    yarn.nodemanager.aux-services        = mapreduce_shuffle
    yarn.nodemanager.resource.memory-mb  = 6144
    yarn.scheduler.minimum-allocation-mb = 2048
    yarn.scheduler.maximum-allocation-mb = 6144

Exception from container-launch: ExitCodeException exitCode=134:

    /bin/bash: line 1:  3876 Aborted (core dumped)
    /usr/lib/jvm/java-7-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true
    -Dhadoop.metrics.log.level=WARN -Xmx8192m
    -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1424264025191_0002/container_1424264025191_0002_01_000011/tmp
    -Dlog4j.configuration=container-log4j.properties
    -Dyarn.app.container.log.dir=/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011
    -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
    org.apache.hadoop.mapred.YarnChild 192.168.0.12 50842 attempt_1424264025191_0002_m_000005_0 11
    > /home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011/stdout
    2> /home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011/stderr

How can I avoid this? Any help is appreciated.

Is there an option to restrict the number of containers on Hadoop nodes?
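For what it's worth, the number of concurrent containers each node can hold follows from the settings above. This is only a back-of-the-envelope sketch (real scheduling also depends on vcores and the scheduler configuration), but it shows how the memory values interact:

```python
# Rough estimate of concurrent map containers per NodeManager,
# using the yarn-site.xml / mapred-site.xml values quoted above.
# Not exact scheduler behavior; vcores and scheduler config also matter.

node_memory_mb = 6144   # yarn.nodemanager.resource.memory-mb
container_mb   = 4096   # mapreduce.map.memory.mb (per map task)
min_alloc_mb   = 2048   # yarn.scheduler.minimum-allocation-mb

# YARN rounds each container request up to a multiple of the
# minimum allocation before placing it.
rounded_mb = -(-container_mb // min_alloc_mb) * min_alloc_mb

containers_per_node = node_memory_mb // rounded_mb
print(rounded_mb, containers_per_node)  # 4096 1 -> one 4096 MB map per node
```

So under these settings each 6144 MB node can run only one 4096 MB map container at a time, which is one way the configuration itself limits container count per node.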