Subject: Re: YARN creates only 1 container
From: hari <haribaha@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 27 May 2014 20:56:16 -0400

The issue was not related to the container configuration after all. Due to a misconfiguration, the ApplicationMaster was unable to contact the ResourceManager, and that is what caused the single-container problem.

However, the total number of containers allocated is still not what I expect. The configuration settings should result in 16 containers per node, but YARN is allocating 64 containers per node. Reiterating the config parameters here:

mapred-site.xml
mapreduce.map.cpu.vcores = 1
mapreduce.reduce.cpu.vcores = 1
mapreduce.map.memory.mb = 1024
mapreduce.reduce.memory.mb = 1024
mapreduce.map.java.opts = -Xmx1024m
mapreduce.reduce.java.opts = -Xmx1024m

yarn-site.xml
yarn.nodemanager.resource.memory-mb = 65536
yarn.nodemanager.resource.cpu-vcores = 16
yarn.scheduler.minimum-allocation-mb = 1024
yarn.scheduler.maximum-allocation-mb = 2048
yarn.scheduler.minimum-allocation-vcores = 1
yarn.scheduler.maximum-allocation-vcores = 1
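In case it helps, here is the back-of-the-envelope calculation behind my 16-per-node expectation, as a quick Python sketch. The min-of-memory-and-vcores formula is just my assumption about how the scheduler combines the two limits, and the variable names are mine; the property values are the ones listed above.

# Rough per-node capacity check (my assumption of how the limits combine,
# not taken from the YARN source): each map/reduce container requests
# 1024 MB and 1 vcore, so the node should be limited by whichever
# resource runs out first.
node_memory_mb = 65536      # yarn.nodemanager.resource.memory-mb
node_vcores = 16            # yarn.nodemanager.resource.cpu-vcores
container_memory_mb = 1024  # mapreduce.{map,reduce}.memory.mb
container_vcores = 1        # mapreduce.{map,reduce}.cpu.vcores

by_memory = node_memory_mb // container_memory_mb   # 64
by_vcores = node_vcores // container_vcores         # 16
print("limit by memory:", by_memory)
print("limit by vcores:", by_vcores)
print("expected containers per node:", min(by_memory, by_vcores))  # 16

The 64 containers per node I am actually seeing matches the memory-only limit, so I suspect the vcore settings are not being taken into account, but I have not been able to confirm that.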
Is there anything else that might be causing this problem?

thanks,
hari

On Tue, May 27, 2014 at 3:31 AM, hari <haribaha@gmail.com> wrote:
> Hi,
>
> When using YARN version 2.2.0, only 1 container is created
> for an application in the entire cluster.
> The single container is created at an arbitrary node
> for every run. This happens when running any application from
> the examples jar (e.g., wordcount). Currently only one application is
> run at a time. The input data size is > 200 GB.
>
> I am setting custom values that affect the concurrent container count.
> These config parameters were mostly taken from:
> http://blog.cloudera.com/blog/2014/04/apache-hadoop-yarn-avoiding-6-time-consuming-gotchas/
> There wasn't much description elsewhere on how the container count
> would be decided.
>
> The settings are:
>
> mapred-site.xml
> mapreduce.map.cpu.vcores = 1
> mapreduce.reduce.cpu.vcores = 1
> mapreduce.map.memory.mb = 1024
> mapreduce.reduce.memory.mb = 1024
> mapreduce.map.java.opts = -Xmx1024m
> mapreduce.reduce.java.opts = -Xmx1024m
>
> yarn-site.xml
> yarn.nodemanager.resource.memory-mb = 65536
> yarn.nodemanager.resource.cpu-vcores = 16
>
> From these settings, each node should be running 16 containers.
>
> Let me know if there might be something else affecting the container
> count.
>
> thanks,
> hari