hadoop-hdfs-user mailing list archives

From Varun Saxena <vsaxena.va...@gmail.com>
Subject Re: yarn.nodemanager.resource.cpu-vcores vs yarn.scheduler.maximum-allocation-vcores
Date Sun, 23 Aug 2015 17:00:31 GMT
Hi Pedro,

Actual allocation would depend on the total resource capability advertised
by NM while registering with RM.

yarn.scheduler.maximum-allocation-vcores merely puts an upper cap on the
number of vcores the RM can allocate to a single container, i.e. any
resource request/ask from an AM that asks for more than 32 vcores (the
default value) for a container will be normalized down to 32.

If no node advertising that many vcores is available, the allocation will
not be fulfilled.

yarn.scheduler.maximum-allocation-vcores is configured on the Resource
Manager and hence is common to the whole cluster, which may well contain
multiple nodes with heterogeneous resource capabilities.

yarn.nodemanager.resource.cpu-vcores, on the other hand, has to be
configured per node, according to that particular node's resource
capability.
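To make the split concrete, here is how the two properties might be set in yarn-site.xml (the values are just the defaults mentioned in this thread; the per-node value would differ on a bigger node):

```xml
<!-- yarn-site.xml on the ResourceManager: cluster-wide per-container cap -->
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>32</value>
</property>

<!-- yarn-site.xml on each NodeManager: that node's advertised capacity -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
```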

Recently there has been work done to automatically pick up memory and CPU
information from the underlying OS (Linux and Windows are supported) if
configured to do so. This change will be available in 2.8.
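If I recall the property name correctly, opting in to that 2.8 auto-detection looks something like the following (treat the exact property name as an assumption and check yarn-default.xml for your release):

```xml
<!-- Assumed property name for the 2.8 hardware auto-detection feature -->
<property>
  <name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
  <value>true</value>
</property>
```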
I hope this answers your question.

Varun Saxena.

On Sun, Aug 23, 2015 at 9:40 PM, Pedro Magalhaes <pedrorjbr@gmail.com> wrote:

> I was looking at default parameters for:
> yarn.nodemanager.resource.cpu-vcores = 8
> yarn.scheduler.maximum-allocation-vcores = 32
> For me, these two parameters as defaults don't make any sense.
> The first one says "the number of CPU cores that can be allocated for
> containers." (I imagine that is vcores.) The second says: "The maximum
> allocation for every container request at the RM". In my opinion, the
> second one must be equal to or less than the first one.
> How can I allocate 32 vcores for a container if I have only 8 cores
> available per container?
