hadoop-user mailing list archives

From George Liaw <george.a.l...@gmail.com>
Subject Hadoop 2.7.2 YARN Memory Utilization
Date Fri, 07 Oct 2016 19:31:48 GMT
Hi,
I'm setting up a Hadoop 2.7.2 cluster with some configs that I inherited, and I'm running into an issue where YARN isn't utilizing all of the memory available on the NodeManagers: each application seems to be limited to 2-3 map tasks for some reason. Can anyone shed some light on this or let me know what else I should look into?

I've attached screenshots below of what I'm seeing.

Relevant configs that I'm aware of:
*yarn-site.xml:*

<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>128</value>
</property>

<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
</property>

*mapred-site.xml:*

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>

<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
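
For context, one property I haven't set anywhere is the total memory the NodeManager offers to containers. If I'm reading the docs right, it defaults to 8192 MB in 2.7.x, which would only fit two 4096 MB map containers per node:

```xml
<!-- yarn-site.xml: total memory a NodeManager advertises to the scheduler.
     Not present in my configs, so presumably the 2.7.x default of 8192 MB
     applies; with 4096 MB map containers that would cap each node at two
     concurrent tasks. The value below is the documented default, not
     something I've verified on this cluster. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
```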

[Attachments: three inline screenshots; images not preserved in the archive]

Thanks,
George

-- 
George A. Liaw
