hadoop-user mailing list archives

From: 杨浩 <yangha...@gmail.com>
Subject: Re: WELCOME to user@hadoop.apache.org
Date: Mon, 08 Jun 2015 07:44:14 GMT
It seems the parameter "mapreduce.map.memory.mb" is read from the client side (the machine that submits the job) rather than from each node's configuration, which would explain why your slaves' settings are ignored and the mappers fall back to the 1024 MB default.
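
A minimal sketch of what that implies, assuming the job is submitted from a separate client/gateway machine (the 1800 MB figure is just your type 1 value, reused for illustration): the property has to be set in mapred-site.xml on that machine, or passed per job, e.g. -Dmapreduce.map.memory.mb=1800 when the driver goes through ToolRunner.

<!-- client-side mapred-site.xml; value borrowed from the type 1 example above -->
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>1800</value>
</property>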

2015-06-07 15:05 GMT+08:00 J. Rottinghuis <jrottinghuis@gmail.com>:

> On each node you can configure how much memory is available for
> containers to run. Separately, for each application you can configure how
> large its containers should be. For MR apps, you can set sizes
> independently for the mappers, the reducers, and the app master itself.
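>
> For illustration, a per-job configuration for those three sizes might look
> like this (a sketch only; the values are placeholders, not recommendations):
>
> <!-- illustrative per-job container sizes -->
> <property>
>     <name>mapreduce.map.memory.mb</name>
>     <value>2048</value>
> </property>
> <property>
>     <name>mapreduce.reduce.memory.mb</name>
>     <value>4096</value>
> </property>
> <property>
>     <name>yarn.app.mapreduce.am.resource.mb</name>
>     <value>1536</value>
> </property>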
>
> YARN will determine, through scheduling rules and data locality, where
> tasks are run. One app has one container size (per category: map, reduce,
> AM) that is not driven by the nodes. Available node memory divided by task
> size determines how many tasks run on each node. There are also minimum
> and maximum container sizes, so you can avoid pathological setups such as
> a thousand 1 MB containers.
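>
> As a sketch (the property names are the standard YARN ones; the values are
> illustrative), the per-node capacity goes in each NodeManager's
> yarn-site.xml and the bounds sit on the ResourceManager. With 22000 MB per
> node and 1800 MB map containers, 22000 / 1800 ≈ 12 tasks fit per node,
> keeping in mind that the scheduler may round container requests up to a
> multiple of the minimum allocation:
>
> <!-- per node: memory available for containers -->
> <property>
>     <name>yarn.nodemanager.resource.memory-mb</name>
>     <value>22000</value>
> </property>
> <!-- cluster-wide bounds on container size; illustrative values -->
> <property>
>     <name>yarn.scheduler.minimum-allocation-mb</name>
>     <value>600</value>
> </property>
> <property>
>     <name>yarn.scheduler.maximum-allocation-mb</name>
>     <value>8192</value>
> </property>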
>
> Hope that helps,
>
> Joep
>
> On Thu, Jun 4, 2015 at 6:48 AM, paco <pacopww@gmail.com> wrote:
>
>>
>> Hello,
>>
>> Recently I have increased my physical cluster. I have two kinds of nodes:
>>
>> Type 1:
>>     RAM: 24 GB
>>     12 cores
>>
>> Type 2:
>>     RAM: 64 GB
>>     12 cores
>>
>> These nodes are in the same physical rack. I would like to configure the
>> cluster to run 12 containers per node: on type 1 nodes each mapper gets
>> 1.8 GB (22 GB / 12 cores ≈ 1.8 GB), and on type 2 nodes each mapper gets
>> 5.3 GB (64 GB / 12 cores ≈ 5.3 GB). Is that possible?
>>
>> I have configured it like this:
>>
>> nodes type 1(slaves):
>> <property>
>>     <name>yarn.nodemanager.resource.memory-mb</name>
>>     <value>22000</value>
>> </property>
>>
>> <property>
>>     <name>mapreduce.map.memory.mb</name>
>>     <value>1800</value>
>> </property>
>> <property>
>>     <name>mapred.map.child.java.opts</name>
>>     <value>-Xmx1800m</value>
>> </property>
>>
>>
>>
>> nodes type 2(slaves):
>> <property>
>>     <name>yarn.nodemanager.resource.memory-mb</name>
>>     <value>60000</value>
>> </property>
>>
>> <property>
>>     <name>mapreduce.map.memory.mb</name>
>>     <value>5260</value>
>> </property>
>> <property>
>>     <name>mapred.map.child.java.opts</name>
>>     <value>-Xmx5260m</value>
>> </property>
>>
>>
>>
>> Hadoop is creating the mappers with 1 GB of memory, like this:
>>
>> Nodes of type 1:
>> 20 GB / 1 GB = 20 containers, each executing with -Xmx1800
>>
>> Nodes of type 2:
>> 60 GB / 1 GB = 60 containers, each executing with -Xmx5260
>>
>>
>> Thanks!
>>
>>
>>
>
