hive-user mailing list archives

From hadoop hive <hadooph...@gmail.com>
Subject Re: Container is running beyond physical memory limits
Date Tue, 13 Oct 2015 20:20:18 GMT
http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
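
The sizing guidance in that guide boils down to a small calculation. A minimal sketch of it, assuming the formulas as described in the HDP 2.0 planning guide (the function and variable names below are illustrative, not from any real API):

```python
# Sketch of the YARN container sizing arithmetic from the HDP planning
# guide linked above. Names are illustrative; the formulas are the guide's:
#   containers = min(2 * CORES, 1.8 * DISKS, available RAM / MIN_CONTAINER_SIZE)
#   RAM per container = max(MIN_CONTAINER_SIZE, available RAM / containers)

def min_container_mb(total_ram_gb):
    """Recommended minimum container size for a node with this much RAM."""
    if total_ram_gb <= 4:
        return 256
    if total_ram_gb <= 8:
        return 512
    if total_ram_gb <= 24:
        return 1024
    return 2048

def plan(total_ram_gb, cores, disks, reserved_gb):
    """Containers per node and MB per container, per the guide's formulas."""
    avail_mb = (total_ram_gb - reserved_gb) * 1024  # RAM left after OS/daemons
    min_mb = min_container_mb(total_ram_gb)
    containers = min(2 * cores, int(1.8 * disks), avail_mb // min_mb)
    ram_per_container = max(min_mb, avail_mb // containers)
    return containers, ram_per_container

# Example: 48 GB node, 12 cores, 12 disks, 6 GB reserved for OS and daemons
containers, mb = plan(48, 12, 12, 6)
print(containers, mb)  # -> 21 2048
```

With those numbers, mapreduce.map.memory.mb would be set to one container (2048) and mapreduce.reduce.memory.mb to two (4096), with the -Xmx heaps at roughly 80% of each.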

On Wed, Oct 14, 2015 at 1:42 AM, Mich Talebzadeh <mich@peridale.co.uk>
wrote:

> Thank you all.
>
>
>
> Hi Gopal,
>
>
>
> My understanding is that the parameter below specifies a maximum size of 4 GB
> for each container. That seems to work for me
>
>
>
> <property>
>   <name>mapreduce.map.memory.mb</name>
>   <value>4096</value>
> </property>
>
>
>
> Now I am rather confused about the following parameters (for example
> mapreduce.reduce versus mapreduce.map) and how they relate to each other
>
>
>
>
>
> <property>
>   <name>mapreduce.reduce.memory.mb</name>
>   <value>8192</value>
> </property>
>
> <property>
>   <name>mapreduce.map.java.opts</name>
>   <value>-Xmx3072m</value>
> </property>
>
> <property>
>   <name>mapreduce.reduce.java.opts</name>
>   <value>-Xmx6144m</value>
> </property>
>
>
>
> Can you please verify if these settings are correct and how they relate to
> each other?
>
>
>
> Thanks
>
>
>
>
>
> Mich Talebzadeh
>
>
>
> Sybase ASE 15 Gold Medal Award 2008
>
> A Winning Strategy: Running the most Critical Financial Data on ASE 15
>
>
> http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf
>
> Author of the books "A Practitioner’s Guide to Upgrading to Sybase ASE
> 15", ISBN 978-0-9563693-0-7.
>
> co-author "Sybase Transact SQL Guidelines Best Practices", ISBN
> 978-0-9759693-0-4
>
> Publications due shortly:
>
> Complex Event Processing in Heterogeneous Environments, ISBN:
> 978-0-9563693-3-8
>
> Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume
> one out shortly
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> NOTE: The information in this email is proprietary and confidential. This
> message is for the designated recipient only, if you are not the intended
> recipient, you should destroy it immediately. Any information in this
> message shall not be understood as given or endorsed by Peridale Technology
> Ltd, its subsidiaries or their employees, unless expressly so stated. It is
> the responsibility of the recipient to ensure that this email is virus
> free, therefore neither Peridale Ltd, its subsidiaries nor their employees
> accept any responsibility.
>
>
>
> -----Original Message-----
> From: Gopal Vijayaraghavan [mailto:gopal@hortonworks.com] On Behalf Of
> Gopal Vijayaraghavan
> Sent: 13 October 2015 20:55
> To: user@hive.apache.org
> Cc: Mich Talebzadeh <mich@peridale.co.uk>
> Subject: Re: Container is running beyond physical memory limits
>
>
>
>
>
>
>
> > is running beyond physical memory limits. Current usage: 2.0 GB of 2 GB
> > physical memory used; 6.6 GB of 8 GB virtual memory used. Killing
> > container.
>
>
>
> You need to set yarn.nodemanager.vmem-check-enabled=false on *every*
> machine in your cluster & restart all NodeManagers.
>
>
>
> The VMEM check made a lot of sense in the 32-bit days, when the CPU forced
> a maximum of 4 GB of VMEM per process (even with PAE).
>
>
>
> Similarly, it was a way to penalize processes that swap out to disk, since
> the pmem check only tracks the actual RSS.
>
>
>
> In the large-RAM 64-bit world, vmem is not a significant issue yet - I
> think the addressing limit is 128 TB per process.
>
>
>
> > <property>
> >   <name>mapreduce.reduce.memory.mb</name>
> >   <value>4096</value>
> > </property>
> ...
> > <property>
> >   <name>mapreduce.reduce.java.opts</name>
> >   <value>-Xmx6144m</value>
> > </property>
>
>
>
> That's the next failure point: a 4 GB container with a 6 GB heap limit. To
> produce an immediate failure when checking configs, add
>
>
>
> -XX:+AlwaysPreTouch -XX:+UseNUMA
>
>
>
> to the java.opts.
>
>
>
> Cheers,
>
> Gopal
>
>
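
The mismatch Gopal points out reduces to a simple invariant: the -Xmx heap must fit inside the container allocation, conventionally at about 80% of memory.mb so the JVM's non-heap overhead has headroom. A minimal sanity-check sketch (the property values are the ones from this thread; the 0.8 ratio is a common rule of thumb, not an official Hadoop constant, and heap_fits is my own name):

```python
# Sanity check for the container-size vs. JVM-heap relationship discussed
# above. The 0.8 ratio is a rule of thumb: heap plus JVM/native overhead
# must fit within the YARN container, or the pmem check kills the task.

def heap_fits(container_mb, xmx_mb, ratio=0.8):
    """True if the -Xmx heap leaves reasonable headroom in the container."""
    return xmx_mb <= container_mb * ratio

# Mich's settings: 4096 MB map container with -Xmx3072m,
# 8192 MB reduce container with -Xmx6144m -- both consistent.
print(heap_fits(4096, 3072))  # -> True
print(heap_fits(8192, 6144))  # -> True

# The combination Gopal quotes: a 4096 MB reduce container with -Xmx6144m.
# The heap is larger than the whole container, so it can never survive.
print(heap_fits(4096, 6144))  # -> False
```

With -XX:+AlwaysPreTouch added as Gopal suggests, the JVM touches every heap page at startup, so an inconsistent pair like the last one fails immediately instead of minutes into the job.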
