hadoop-common-user mailing list archives

From Allen Wittenauer <...@yahoo-inc.com>
Subject Re: best way to set memory
Date Thu, 23 Jul 2009 20:13:37 GMT

FWIW, we actually push a completely separate config to the name node, jt,
etc., because of some of the other settings (like slaves and
dfs.[in|ex]cludes).  But if you wanted to do an all-in-one, well...
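
For concreteness, one way to do the separate-config thing is to point
hadoop-daemon.sh at a different conf dir per role, each with its own
hadoop-env.sh and HADOOP_HEAPSIZE.  (The directory names below are just
illustrative, not what we actually use.)

  # hypothetical per-role conf dirs, each carrying its own hadoop-env.sh
  bin/hadoop-daemon.sh --config /etc/hadoop/conf.namenode    start namenode
  bin/hadoop-daemon.sh --config /etc/hadoop/conf.datanode    start datanode
  bin/hadoop-daemon.sh --config /etc/hadoop/conf.jobtracker  start jobtracker
  bin/hadoop-daemon.sh --config /etc/hadoop/conf.tasktracker start tasktracker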

Hmm.  Looking at the code, this worked differently than I always thought it
did (at least in 0.18).  Like Amogh, I thought that HADOOP_NAMENODE_OPTS (or
at least HADOOP_NAMENODE_HEAPSIZE) would override, but that clearly isn't
the case.

I've filed HADOOP-6168 and appropriately bonked some of my local Hadoop
committers on the head. :)
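
For reference, the sort of thing in conf/hadoop-env.sh that we expected to
work (the values are only examples) looks like:

  # default heap for every daemon started from this conf dir
  export HADOOP_HEAPSIZE=1000
  # hoped-for per-daemon overrides; whether the -Xmx here actually beats
  # the script's default is exactly what HADOOP-6168 is about
  export HADOOP_NAMENODE_OPTS="-Xmx4096m $HADOOP_NAMENODE_OPTS"
  export HADOOP_TASKTRACKER_OPTS="-Xmx512m $HADOOP_TASKTRACKER_OPTS"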

On 7/22/09 7:50 AM, "Fernando Padilla" <fern@alum.mit.edu> wrote:

> But right now the script forcefully adds an extra -Xmx1000m even if you
> don't want it.
> 
> I guess I'll be submitting a patch for hadoop-daemon.sh later. :) :)
> 
> thank you all
> 
> 
> On 7/22/09 2:25 AM, Amogh Vasekar wrote:
>> I haven't played a lot with it, but you may want to check if setting
>> HADOOP_NAMENODE_OPTS or HADOOP_TASKTRACKER_OPTS helps. Let me know if you find a
>> way to do this :)
>> 
>> Cheers!
>> Amogh
>> 
>> -----Original Message-----
>> From: Fernando Padilla [mailto:fern@alum.mit.edu]
>> Sent: Wednesday, July 22, 2009 9:47 AM
>> To: common-user@hadoop.apache.org
>> Subject: Re: best way to set memory
>> 
>> I was thinking not for M/R, but for the actual daemons:
>> 
>> When I go and start up a daemon (like below), they all use the same
>> hadoop-env.sh, which only lets you set HADOOP_HEAPSIZE once, not
>> differently for each daemon type.
>> 
>> bin/hadoop-daemon.sh start namenode
>> bin/hadoop-daemon.sh start datanode
>> bin/hadoop-daemon.sh start secondarynamenode
>> bin/hadoop-daemon.sh start jobtracker
>> bin/hadoop-daemon.sh start tasktracker
>> 
>> 
>> 
>> Amogh Vasekar wrote:
>>> If you need to set the Java options for memory, you can do this via the
>>> configuration in your MR job.
>>> 
>>> -----Original Message-----
>>> From: Fernando Padilla [mailto:fern@alum.mit.edu]
>>> Sent: Wednesday, July 22, 2009 9:11 AM
>>> To: common-user@hadoop.apache.org
>>> Subject: best way to set memory
>>> 
>>> So.. I want to have different memory profiles for
>>> NameNode/DataNode/JobTracker/TaskTracker.
>>> 
>>> But it looks like I only have one environment variable to modify,
>>> HADOOP_HEAPSIZE, while I might be running more than one daemon on a single
>>> box/deployment/conf directory.
>>> 
>>> Is there a proper way to set the memory for each kind of server? Or has
>>> an issue been created to document this bug/deficiency??

