hadoop-hdfs-user mailing list archives

From SF Hadoop <sfhad...@gmail.com>
Subject Re: Hadoop configuration for cluster machines with different memory capacity / # of cores etc.
Date Thu, 09 Oct 2014 17:48:22 GMT
Yes, you are correct.  Just keep in mind that for every machine of spec X
you must maintain a version-X set of Hadoop configs that resides only on
the spec-X machines; the version-Y configs reside only on the spec-Y
machines, and so on.

But yes, it is possible.
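As a sketch of what that looks like in practice (the sizes below are
illustrative, not recommendations), the yarn-site.xml deployed to each class
of node would carry its own values for the two properties in question. Each
NodeManager reads these locally at startup and advertises the result to the
ResourceManager, which is why per-node values work:

```xml
<!-- yarn-site.xml on the larger nodes (e.g. 64 GB RAM, 16 cores) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>57344</value> <!-- leave some RAM for the OS and daemons -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>16</value>
</property>

<!-- yarn-site.xml on the smaller nodes (e.g. 16 GB RAM, 4 cores) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>12288</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>4</value>
</property>
```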

On Thu, Oct 9, 2014 at 9:40 AM, Manoj Samel <manojsameltech@gmail.com>
wrote:

> So, in that case, the ResourceManager will allocate containers of
> different capacity based on each node's capacity?
>
> Thanks,
>
> On Wed, Oct 8, 2014 at 9:42 PM, Nitin Pawar <nitinpawar432@gmail.com>
> wrote:
>
>> you can have different values on different nodes
>>
>> On Thu, Oct 9, 2014 at 4:15 AM, Manoj Samel <manojsameltech@gmail.com>
>> wrote:
>>
>>> In a Hadoop cluster where different machines have different memory
>>> capacity and/or different numbers of cores, is it required that
>>> memory/core-related parameters be set to the SAME values on all nodes?
>>> Or is it possible to set different values for different nodes?
>>>
>>> E.g., can yarn.nodemanager.resource.memory-mb
>>> and yarn.nodemanager.resource.cpu-vcores have different values on
>>> different nodes?
>>>
>>> Thanks,
>>>
>>>
>>>
>>
>>
>> --
>> Nitin Pawar
>>
>
>
