From: Ted Yu <yuzhihong@gmail.com>
To: user@hadoop.apache.org
Subject: Re: Question about YARN Memory allocation
Date: Wed, 28 Jan 2015 01:09:37 -0800

LCE refers to Linux Container Executor.

Please take a look at yarn-default.xml
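
(For reference, a minimal sketch of the two properties that enable LCE, in the same key: value style as the configuration quoted below. The property names are from yarn-default.xml; the group value is illustrative and depends on your installation:)

```
yarn.nodemanager.container-executor.class: org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
yarn.nodemanager.linux-container-executor.group: hadoop
```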

Cheers

On Jan 28, 2015, at 12:49 AM, 임정택 <kabhwan@gmail.com> wrote:
Hi!

First of all, it was my mistake. :( All memory is "in use".
Also, I found that each container's information says "TotalMemoryNeeded 2048 / TotalVCoreNeeded 1".
I don't understand why a container needs 2048 MB (2 GB) of memory to run.
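
(A likely explanation, assuming the cluster is running the Fair Scheduler, the CDH default: the Fair Scheduler grants memory in increments of yarn.scheduler.increment-allocation-mb, which defaults to 1024, so a 1536 MB request is rounded up to 2048 MB. The Capacity Scheduler normalizes requests similarly, to multiples of yarn.scheduler.minimum-allocation-mb:)

```
requested: 1536 MB
granted:   ceil(1536 / 1024) * 1024 = 2048 MB
```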

Maybe I have to learn about YARN schedulers and the relevant configurations.
I'm a YARN newbie, and I'm learning by reading some docs. :)

Btw, what are LCE and DRC?

Thanks again for helping.

Regards.
Jungtaek Lim (HeartSaVioR)


2015-01-28 17:35 GMT+09:00 Naganarasimha G R (Naga) <garlanaganarasimha@huawei.com>:
Hi Jungtaek Lim,
Earlier we faced a similar problem of reservation with the Capacity Scheduler, and it's actually solved by YARN-1769 (part of Hadoop 2.6).
So I hope it might help you, if you have configured the Capacity Scheduler. Also check whether "yarn.scheduler.capacity.node-locality-delay" is configured (it might not be a direct help, but it might reduce the probability of reservation).
I have one doubt about the info: in the image it seems 20 GB and 10 vcores are reserved, but you seem to say all are reserved?
Are LCE & DRC also configured? If so, what vcores are configured for the NM and the app's containers?
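
(For reference, a hedged sketch of the two settings being asked about, in the same key: value style as the configuration below. DRC presumably stands for the DominantResourceCalculator; the values shown are illustrative:)

```
yarn.scheduler.capacity.resource-calculator: org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
yarn.scheduler.capacity.node-locality-delay: 40
```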

Regards,
Naga

From: 임정택 [kabhwan@gmail.com]
Sent: Wednesday, January 28, 2015 13:23
To: user@hadoop.apache.org
Subject: Re: Question about YARN Memory allocation

Forgot to add one thing: all memory (120 GB) is reserved now.

Apps Submitted: 2
Apps Pending: 1
Apps Running: 1
Apps Completed: 0
Containers Running: 60
Memory Used: 120 GB
Memory Total: 120 GB
Memory Reserved: 20 GB
VCores Used: 60
VCores Total: 80
VCores Reserved: 10
Active Nodes: 10
Decommissioned Nodes: 0
Lost Nodes: 0
Unhealthy Nodes: 0
Rebooted Nodes: 0

Furthermore, 10 more vcores are reserved. I don't know what that is.
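
(One hedged reading of these numbers: the 20 GB / 10 vcores reserved would correspond to 10 pending containers of 2048 MB and 1 vcore each, one waiting on each of the 10 nodes for memory to free up:)

```
20 GB reserved / 2048 MB per container = 10 containers
10 containers * 1 vcore per container  = 10 vcores reserved
```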


2015-01-28 16:47 GMT+09:00 임정택 <kabhwan@gmail.com>:
Hello all!

I'm new to YARN, so this could be a beginner question.
(I've been using MRv1 and changed over just now.)

I'm using HBase with 3 masters and 10 slaves - CDH 5.2 (Hadoop 2.5.0).
In order to migrate from MRv1 to YARN, I read several docs and changed configurations.

```
yarn.nodemanager.resource.memory-mb: 12288
yarn.scheduler.minimum-allocation-mb: 512
mapreduce.map.memory.mb: 1536
mapreduce.reduce.memory.mb: 1536
mapreduce.map.java.opts: -Xmx1024m -Dfile.encoding=UTF-8 -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
mapreduce.reduce.java.opts: -Xmx1024m -Dfile.encoding=UTF-8 -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
```

I'm expecting 80 containers to run concurrently, but in reality it's 60 containers. (59 maps ran concurrently; maybe 1 is the ApplicationMaster.)
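
(The expected count follows from the configuration above, and the observed count matches if each container is actually granted 2048 MB, as the "TotalMemoryNeeded 2048" figure elsewhere in this thread suggests:)

```
expected: (12288 MB / 1536 MB) * 10 nodes = 8 * 10 = 80 containers
observed: (12288 MB / 2048 MB) * 10 nodes = 6 * 10 = 60 containers
```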

All YarnChild processes' VIRT is higher than 1.5 GB and lower than 2 GB now, so I'm suspecting that.
But it's better to make it clear, to understand YARN better.

Any helps & explanations are really appreciated.
Thanks!

Best regards.
Jungtaek Lim (HeartSaVioR)




--
Name : 임정택
Blog : http://www.heartsavior.net / http://dev.heartsavior.net
Twitter : http://twitter.com/heartsavior
LinkedIn : http://www.linkedin.com/in/heartsavior