hadoop-mapreduce-user mailing list archives

From: Ravi Prakash <ravihad...@gmail.com>
Subject: Re: hadoop cluster container memory limit
Date: Fri, 14 Oct 2016 18:36:42 GMT
Hi!

Look at yarn.nodemanager.resource.memory-mb in
https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
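
For reference, a minimal sketch of how that property is typically set in
yarn-site.xml on each NodeManager host (the 23040 value below is only an
illustrative placeholder for a ~30.5 GB node, not an EMR default):

------------------------------
<!-- Memory (in MB) the NodeManager advertises to the ResourceManager
     for running containers. 23040 is a placeholder that leaves headroom
     for the OS and Hadoop daemons; it is not an EMR default. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>23040</value>
</property>
------------------------------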

I'm not sure where the 11.25 GB figure comes from. How did you deploy the cluster?

Ravi

On Thu, Oct 13, 2016 at 9:07 PM, agc studio <agron.developer@gmail.com>
wrote:

> Hi all,
>
> I am running an EMR cluster with 1 master node and 10 core nodes.
>
> When I go to the dashboard of the Hadoop cluster, I see that each container
> only has 11.25 GB of memory available, whereas the instance that I use for
> it (r3.xlarge) has 30.5 GB of memory.
>
> May I ask how this is possible, and why? Also, is it possible to fully
> utilise these resources?
> I am able to change the settings to utilise the 11.25 GB of available memory,
> but I am wondering about the remainder of the 30.5 GB that r3.xlarge offers.
> ------------------------------
> HEAP=9216
> -Dmapred.child.java.opts=-Xmx${HEAP}m \
> -Dmapred.job.map.memory.mb=${HEAP} \
> -Dyarn.app.mapreduce.am.resource.mb=1024 \
> -Dmapred.cluster.map.memory.mb=${HEAP} \
> ------------------------------
> Please see the linked screenshot of the cluster: http://imgur.com/a/zFvyw
>
