hadoop-mapreduce-user mailing list archives

From: 임정택 <kabh...@gmail.com>
Subject: Re: Question about YARN Memory allocation
Date: Wed, 28 Jan 2015 08:49:28 GMT
Hi!

First of all, it was my mistake. :( All memory is actually "in use", not reserved.
Also, I found that each container's information says "TotalMemoryNeeded 2048 /
TotalVCoreNeeded 1".
I don't understand why a container needs 2048 MB (2 GB) of memory to run.
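One guess I still need to verify: if the Fair Scheduler is in use (the CDH
default, as far as I know), it rounds every request up to a multiple of
yarn.scheduler.increment-allocation-mb, which defaults to 1024 MB, so roughly:

```
requested = mapreduce.map.memory.mb                = 1536 MB
increment = yarn.scheduler.increment-allocation-mb = 1024 MB (Fair Scheduler default, assumed)
allocated = ceil(1536 / 1024) * 1024               = 2048 MB
```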

Maybe I need to learn about the YARN schedulers and their relevant configurations.
I'm a YARN newbie, and I'm learning it by reading some docs. :)

By the way, what are LCE and DRC?

Thanks again for helping.

Regards.
Jungtaek Lim (HeartSaVioR)


2015-01-28 17:35 GMT+09:00 Naganarasimha G R (Naga) <
garlanaganarasimha@huawei.com>:

>  Hi Jungtaek Lim,
> Earlier we faced a similar reservation problem with the Capacity Scheduler,
> and it was actually solved by YARN-1769 (part of Hadoop 2.6).
> So it might help you if you have configured the Capacity Scheduler. Also
> check whether "yarn.scheduler.capacity.node-locality-delay" is configured
> (it may not help directly, but it might reduce the probability of reservation).
> I have one doubt about the info: in the image it seems that 20 GB and 10
> vcores are reserved, but you seem to say all are reserved?
> Are LCE & DRC also configured? If so, what vcores are configured for the NM
> and the app's containers?
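>
> For reference, a minimal sketch of the properties I mean (from yarn-site.xml
> and capacity-scheduler.xml; the actual values are whatever you have set):
>
> ```
> # LCE = LinuxContainerExecutor
> yarn.nodemanager.container-executor.class = org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
> # DRC = DominantResourceCalculator (makes the scheduler account for vcores as well as memory)
> yarn.scheduler.capacity.resource-calculator = org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
> # vcores advertised per NM and requested per map/reduce container
> yarn.nodemanager.resource.cpu-vcores = ...
> mapreduce.map.cpu.vcores = ...
> mapreduce.reduce.cpu.vcores = ...
> ```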
>
>  Regards,
> Naga
>  ------------------------------
> *From:* 임정택 [kabhwan@gmail.com]
> *Sent:* Wednesday, January 28, 2015 13:23
> *To:* user@hadoop.apache.org
> *Subject:* Re: Question about YARN Memory allocation
>
>   I forgot to add one thing: all memory (120 GB) is reserved now.
>
>    Apps Submitted: 2 | Apps Pending: 1 | Apps Running: 1 | Apps Completed: 0
>    Containers Running: 60
>    Memory Used: 120 GB | Memory Total: 120 GB | Memory Reserved: 20 GB
>    VCores Used: 60 | VCores Total: 80 | VCores Reserved: 10
>    Active Nodes: 10 | Decommissioned Nodes: 0 | Lost Nodes: 0 | Unhealthy Nodes: 0 | Rebooted Nodes: 0
>  Furthermore, 10 more VCores are reserved. I don't know what that is.
>
>
> 2015-01-28 16:47 GMT+09:00 임정택 <kabhwan@gmail.com>:
>
>> Hello all!
>>
>>  I'm new to YARN, so this could be a beginner question.
>> (I had been using MRv1 and have only just switched.)
>>
>>  I'm using HBase with 3 masters and 10 slaves - CDH 5.2 (Hadoop 2.5.0).
>> In order to migrate from MRv1 to YARN, I read several docs and changed the
>> configuration.
>>
>>  ```
>>  yarn.nodemanager.resource.memory-mb: 12288
>> yarn.scheduler.minimum-allocation-mb: 512
>> mapreduce.map.memory.mb: 1536
>> mapreduce.reduce.memory.mb: 1536
>> mapreduce.map.java.opts: -Xmx1024m -Dfile.encoding=UTF-8
>> -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
>> mapreduce.reduce.java.opts: -Xmx1024m -Dfile.encoding=UTF-8
>> -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
>>  ```
>>
>>  I was expecting 80 containers to run concurrently, but in reality it's
>> 60 containers. (59 maps ran concurrently; maybe 1 is the
>> ApplicationMaster.)
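>>
>>  A back-of-the-envelope check (my own math, assuming the scheduler rounds
>> each 1536 MB request up to 2048 MB):
>>
>>  ```
>>  per node: floor(12288 MB / 2048 MB) = 6 containers
>>  cluster : 6 containers * 10 slaves  = 60 containers
>>  at 1536 : floor(12288 / 1536) * 10  = 80 containers (what I expected)
>>  ```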
>>
>>  All YarnChild processes' VIRT is above 1.5 GB and below 2 GB right now, so
>> I suspect that is the cause.
>> But it's better to make it clear, so that I understand YARN better.
>>
>>  Any help & explanation is really appreciated.
>> Thanks!
>>
>>  Best regards.
>> Jungtaek Lim (HeartSaVioR)
>>
>>
>
>
>  --
>  Name : 임 정택
> Blog : http://www.heartsavior.net / http://dev.heartsavior.net
> Twitter : http://twitter.com/heartsavior
> LinkedIn : http://www.linkedin.com/in/heartsavior
>



-- 
Name : 임 정택
Blog : http://www.heartsavior.net / http://dev.heartsavior.net
Twitter : http://twitter.com/heartsavior
LinkedIn : http://www.linkedin.com/in/heartsavior
