hadoop-hdfs-user mailing list archives

From Sandy Ryza <sandy.r...@cloudera.com>
Subject Re: Parameter 'yarn.nodemanager.resource.cpu-cores' does not work
Date Tue, 23 Jul 2013 00:17:21 GMT
Hi Sam,

LinuxResourceCalculatorPlugin and DominantResourceCalculator control
separate things.  The former is for a NodeManager to calculate the resource
usage of a container process so that it can kill it if it gets too large.
 The latter is used by the Capacity Scheduler to allocate containers, and,
if you're using the Capacity Scheduler, in theory should do what you're
expecting it to do.  Based on the fix version of YARN-2,
DominantResourceCalculator should be included in 2.0.4-alpha.  The Fair
Scheduler will support CPU-based scheduling as well starting in 2.1.0-beta.
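The idea behind DominantResourceCalculator is Dominant Resource Fairness: each request is measured by its largest share of any single cluster resource. A minimal sketch of that comparison (an illustrative model, not YARN's actual implementation; the resource names and cluster sizes are made up):

```python
# Sketch of the dominant-resource comparison behind DominantResourceCalculator.
# A request's "dominant share" is its largest fractional demand of any one
# resource; DRF-style schedulers compare allocations by that share, so a
# CPU-heavy and a memory-heavy job can be weighed against each other.

def dominant_share(request, cluster):
    """Fraction of the cluster consumed by the request's most-demanded resource."""
    return max(request[r] / cluster[r] for r in cluster)

cluster = {"memory_mb": 8192, "vcores": 8}
job_a = {"memory_mb": 1024, "vcores": 4}   # CPU-heavy: 1/8 memory, 1/2 vcores
job_b = {"memory_mb": 4096, "vcores": 1}   # memory-heavy: 1/2 memory, 1/8 vcores

print(dominant_share(job_a, cluster))  # 0.5 (vcores dominate)
print(dominant_share(job_b, cluster))  # 0.5 (memory dominates)
```

With a memory-only calculator, job_a would look tiny (1/8 of memory); the dominant-resource view correctly treats both jobs as equally demanding.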

-Sandy
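For the Capacity Scheduler path Sandy describes, the resource calculator is typically selected in capacity-scheduler.xml rather than yarn-site.xml. A minimal sketch, assuming the `yarn.scheduler.capacity.resource-calculator` property of the Capacity Scheduler (verify the property name against your Hadoop version's documentation):

```xml
<!-- capacity-scheduler.xml: switch the scheduler from the default
     memory-only calculator to dominant-resource (memory + CPU) scheduling.
     Sketch only; confirm the property exists in your release. -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```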


On Sat, Jul 20, 2013 at 11:04 PM, sam liu <samliuhadoop@gmail.com> wrote:

> Thanks, but it seems this does not work for me.
>
> My Hadoop version is 'Hadoop 2.0.4-alpha', and it does not seem to include a
> class named
> 'org.apache.hadoop.yarn.util.resource.DominantResourceCalculator', so I
> replaced it with
> 'org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin'. My
> configuration is below; I use the default values for the other
> LinuxContainerExecutor configurations. My expectation was that with
> 'yarn.nodemanager.resource.cpu-cores' set to 0, no job could be completed
> by the NodeManager, since the slaves would have no CPU to use. But in
> fact, my job completed successfully. Why?
>
>  <property>
>    <name>yarn.nodemanager.resource.cpu-cores</name>
>    <value>0</value>
>  </property>
>
>  <property>
>    <name>yarn.nodemanager.vcores-pcores-ratio</name>
>    <value>0</value>
>  </property>
>
>  <property>
>    <name>yarn.nodemanager.container-monitor.resource-calculator.class</name>
>    <value>org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin</value>
>  </property>
>
>  <property>
>    <name>yarn.nodemanager.container-executor.class</name>
>    <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
>  </property>
>
>
>
> 2013/7/4 Chuan Liu <chuanliu@microsoft.com>
>
>>  I think you need to change the following configurations in
>> yarn-site.xml to enable CPU resource limits.
>>
>> 'yarn.nodemanager.container-monitor.resource-calculator.class'
>> 'org.apache.hadoop.yarn.util.resource.DominantResourceCalculator'
>>
>> 'yarn.nodemanager.container-executor.class'
>> 'org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor'
>>
>> Some LinuxContainerExecutor configurations:
>> yarn.nodemanager.linux-container-executor.path
>> yarn.nodemanager.linux-container-executor.resources-handler.class
>> yarn.nodemanager.linux-container-executor.cgroups.hierarchy
>> yarn.nodemanager.linux-container-executor.cgroups.mount
>> yarn.nodemanager.linux-container-executor.cgroups.mount-path
>>
>> -Chuan
>>
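On the enforcement side, CPU limits only take effect at the OS level when the LinuxContainerExecutor is paired with a cgroups resources handler; without cgroups, vcores influence scheduling decisions but running containers are not throttled. A hedged sketch of such a yarn-site.xml fragment (the handler class and hierarchy value are illustrative; verify both against your Hadoop version):

```xml
<!-- yarn-site.xml: enable cgroups-based CPU enforcement for the
     LinuxContainerExecutor. Sketch only; class name and hierarchy
     path may differ between Hadoop releases. -->
<property>
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
  <value>/hadoop-yarn</value>
</property>
```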
>> *From:* sam liu [mailto:samliuhadoop@gmail.com]
>> *Sent:* Tuesday, July 02, 2013 8:33 PM
>> *To:* user@hadoop.apache.org
>> *Subject:* Parameter 'yarn.nodemanager.resource.cpu-cores' does not work
>>
>> Hi,
>>
>> With Hadoop 2.0.4-alpha, yarn.nodemanager.resource.cpu-cores does not
>> work for me:
>>
>> 1. The performance of running the same terasort job does not change, even
>> after increasing or decreasing the value of
>> 'yarn.nodemanager.resource.cpu-cores' in yarn-site.xml and restarting the
>> YARN cluster.
>>
>> 2. Even if I set the value of both 'yarn.nodemanager.resource.cpu-cores'
>> and 'yarn.nodemanager.vcores-pcores-ratio' to 0, the MR job still
>> completes without any exception, but the expected behavior should be that
>> no CPU could be assigned to the container, and then no job could be
>> executed on the cluster. Right?
>>
>> Thanks!
>>
>
>
