hadoop-common-user mailing list archives

From agc studio <agron.develo...@gmail.com>
Subject hadoop cluster container memory limit
Date Fri, 14 Oct 2016 04:07:15 GMT
Hi all,

I am running an EMR cluster with 1 master node and 10 core nodes.

When I go to the Hadoop cluster dashboard, I see that each container only
has 11.25 GB of memory available, whereas the instance I use for it
(r3.xlarge) has 30.5 GB of memory.

May I ask how this is possible, and why? Also, is it possible to fully
utilise these resources?
I am able to change the settings to utilise the 11.25 GB of available
memory, but I am wondering about the remainder of the 30.5 GB that
r3.xlarge offers. These are the settings I currently pass:
------------------------------
# Per-job memory overrides passed as -D options; the mapred.* names are the
# deprecated Hadoop 1 aliases for the newer mapreduce.* properties.
HEAP=9216
-Dmapred.child.java.opts=-Xmx${HEAP}m \
-Dmapred.job.map.memory.mb=${HEAP} \
-Dyarn.app.mapreduce.am.resource.mb=1024 \
-Dmapred.cluster.map.memory.mb=${HEAP} \
------------------------------
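
For reference, here is a sketch of how the node-level YARN limits could be
raised instead, through a yarn-site classification at cluster creation time.
The values and the configurations.json file name are assumptions for
illustration, not EMR defaults, and some headroom should be left for the OS
and the Hadoop daemons:
------------------------------
# Hypothetical configurations.json: raise the per-node memory YARN may
# allocate and the per-container ceiling (values are illustrative only).
cat > configurations.json <<'EOF'
[
  {
    "Classification": "yarn-site",
    "Properties": {
      "yarn.nodemanager.resource.memory-mb": "27648",
      "yarn.scheduler.maximum-allocation-mb": "27648"
    }
  }
]
EOF

# Apply it when launching the cluster (other options trimmed for brevity):
aws emr create-cluster --release-label emr-5.0.0 \
  --applications Name=Hadoop \
  --instance-type r3.xlarge --instance-count 11 \
  --use-default-roles \
  --configurations file://configurations.json
------------------------------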
Please see the linked screenshot of the cluster: http://imgur.com/a/zFvyw
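
To double-check the effective limits without the dashboard, something like
this on the master node should work (assuming the standard EMR config
location /etc/hadoop/conf and the default ResourceManager port 8088):
------------------------------
# Inspect the configured YARN memory properties directly:
grep -B1 -A2 'memory-mb' /etc/hadoop/conf/yarn-site.xml

# Or ask the ResourceManager over its REST API; totalMB reflects the
# memory YARN believes it can hand out across the cluster:
curl -s http://localhost:8088/ws/v1/cluster/metrics
------------------------------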
