hadoop-mapreduce-user mailing list archives

From Aaron Zimmerman <azimmer...@sproutsocial.com>
Subject [No Subject]
Date Fri, 21 Feb 2014 14:05:39 GMT
The worker nodes on my version 2.2 cluster won't use more than 11 GB of the
30 GB total (24 GB allocated) for MapReduce jobs running in YARN. Does anyone
have an idea what might be constraining the RAM usage?

I followed the steps listed at
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html
and http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
to set the various memory configurations, but no matter what I try, the nodes
on the cluster don't use more than 11 GB of the allocated 24 GB.

The YARN ResourceManager reports that all of the allocated memory is in use
in the status bar across the top, but according to top and other such tools,
it is not.
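
Some rough back-of-envelope math with my settings (listed below), in case it
helps someone spot the problem. My understanding is that the RM counts the
full requested container size as "allocated" the moment it is granted, while
top only sees what the JVMs actually touch:

  24576 MB per node / 4096 MB per map container = 6 concurrent map containers
  6 containers x 4096 MB = 24576 MB "allocated" (what the RM UI reports)
  6 JVMs at -Xmx756m + non-heap overhead + the DataNode/NodeManager daemons
    = only a handful of GB resident, roughly the picture top shows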

I see org.apache.hadoop.mapred.YarnChild processes being created with
-Xmx756m, but I can't find that value anywhere in the mapreduce or yarn
configurations.
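
In case it matters, here's a minimal snippet I've been using to check what a
job client actually picks up from its classpath (PrintMapredOpts is just my
throwaway name). My understanding is that mapreduce.*.java.opts travels with
the job from the submitting machine rather than coming from the worker nodes'
configs, so if the box I submit from still has defaults, that might explain
the -Xmx756m:

import org.apache.hadoop.mapred.JobConf;

public class PrintMapredOpts {
    public static void main(String[] args) {
        // JobConf's static initializer registers mapred-site.xml and
        // yarn-site.xml as default resources, so unlike a bare Configuration
        // this reflects what a submitted job would actually see.
        JobConf conf = new JobConf();
        for (String key : new String[] {
                "mapreduce.map.memory.mb",
                "mapreduce.reduce.memory.mb",
                "mapreduce.map.java.opts",
                "mapreduce.reduce.java.opts",
                "mapred.child.java.opts" }) { // legacy fallback, default -Xmx200m
            System.out.println(key + " = " + conf.get(key));
        }
    }
}

If that prints stock defaults on the machine I submit jobs from, then my
edits on the worker nodes would never reach the task JVMs.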

yarn config:
yarn.nodemanager.resource.memory-mb = 24576
yarn.scheduler.minimum-allocation-mb = 3072
yarn_heapsize = 20000 (not really clear to me what this does...?)
mapreduce2 config:
mapreduce.map.memory.mb = 4096
mapreduce.reduce.memory.mb = 8192
mapreduce.map.java.opts = -Xmx3500
mapreduce.reduce.java.opts = -Xmx7000
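
Two things in the above that I should flag explicitly: as far as I can tell,
yarn_heapsize just becomes YARN_HEAPSIZE in yarn-env.sh, i.e. the heap (in
MB) for the ResourceManager/NodeManager daemons themselves, so it shouldn't
affect task memory at all. Also, I notice my java.opts values have no unit
suffix; the JVM reads -Xmx3500 as 3500 bytes (and refuses to start with a
heap that small), so presumably these were meant to be:

mapreduce.map.java.opts = -Xmx3500m
mapreduce.reduce.java.opts = -Xmx7000m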

Thanks!

Aaron Zimmerman
