Subject: Re: WELCOME to user@hadoop.apache.org
From: 杨浩
To: user@hadoop.apache.org
Date: Mon, 8 Jun 2015 15:44:14 +0800

It seems the parameter "mapreduce.map.memory.mb" is parsed on the client side
(from the submitted job configuration), not from the node a task happens to
run on.
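If that is right, the map container size has to be set in the configuration on
the submitting side rather than on each slave. A minimal sketch of a
client-side mapred-site.xml (values are only examples; mapreduce.map.java.opts
is the non-deprecated spelling of the java.opts property used below in the
thread):

    <!-- mapred-site.xml on the submitting client: one size per job -->
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>1800</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <!-- heap a bit below the container size so the task stays inside
             the memory YARN accounts for -->
        <value>-Xmx1500m</value>
    </property>

The same properties can also be passed per job as -D options on the command
line, assuming the driver goes through ToolRunner/GenericOptionsParser.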
2015-06-07 15:05 GMT+08:00 J. Rottinghuis <jrottinghuis@gmail.com>:

> On each node you can configure how much memory is available for containers
> to run.
> On the other hand, for each application you can configure how large
> containers should be. For MR apps, you can separately set mappers,
> reducers, and the app master itself.
>
> YARN will determine, through scheduling rules and depending on locality,
> where tasks are run. One app has one container size (per respective
> category: map, reduce, AM) that is not driven by nodes. Available node
> memory divided by task size will determine how many tasks run on each node.
> There are minimum and maximum container sizes, so you can avoid running
> crazy things such as 1K 1MB containers, for example.
>
> Hope that helps,
>
> Joep
>
> On Thu, Jun 4, 2015 at 6:48 AM, paco <pacopww@gmail.com> wrote:
>
>> Hello,
>>
>> Recently I have expanded my physical cluster. I now have two kinds of nodes:
>>
>> Type 1:
>>     RAM: 24 GB
>>     12 cores
>>
>> Type 2:
>>     RAM: 64 GB
>>     12 cores
>>
>> These nodes are in the same physical rack. I would like to configure it
>> to use 12 containers per node: on nodes of type 1 each mapper would get
>> 1.8 GB (22 GB / 12 cores = 1.8 GB), and on nodes of type 2 each mapper
>> would get 5.3 GB (60 / 12). Is that possible?
>>
>> I have configured it like this:
>>
>> Nodes of type 1 (slaves):
>>
>> <property>
>>     <name>yarn.nodemanager.resource.memory-mb</name>
>>     <value>22000</value>
>> </property>
>>
>> <property>
>>     <name>mapreduce.map.memory.mb</name>
>>     <value>1800</value>
>> </property>
>>
>> <property>
>>     <name>mapred.map.child.java.opts</name>
>>     <value>-Xmx1800m</value>
>> </property>
>>
>> Nodes of type 2 (slaves):
>>
>> <property>
>>     <name>yarn.nodemanager.resource.memory-mb</name>
>>     <value>60000</value>
>> </property>
>>
>> <property>
>>     <name>mapreduce.map.memory.mb</name>
>>     <value>5260</value>
>> </property>
>>
>> <property>
>>     <name>mapred.map.child.java.opts</name>
>>     <value>-Xmx5260m</value>
>> </property>
>>
>> But Hadoop is creating mappers with 1 GB of memory, like this:
>>
>> Nodes of type 1:
>> 20 GB / 1 GB = 20 containers, each executing with -Xmx1800
>>
>> Nodes of type 2:
>> 60 GB / 1 GB = 60 containers, each executing with -Xmx5260
>>
>> Thanks!
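As a rough illustration of the split Joep describes (all values are just
examples): the per-node capacity stays in each node's yarn-site.xml, the
scheduler's minimum/maximum allocation bounds what a job may request per
container, and the per-node container count then falls out of the division.

    <!-- yarn-site.xml on a type-1 node (about 22 GB usable for containers) -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>22000</value>
    </property>

    <!-- yarn-site.xml read by the ResourceManager: bounds on container requests -->
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>8192</value>
    </property>

With one job-wide map size of 1800 MB, a 22000 MB node would fit about 12
containers and a 60000 MB node about 33 (possibly a few fewer if the scheduler
rounds each request up to a multiple of the minimum allocation), so the counts
per node differ even though the container size is the same for the whole job.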