hadoop-common-user mailing list archives

From Amogh Vasekar <am...@yahoo-inc.com>
Subject RE: best way to set memory
Date Wed, 22 Jul 2009 09:25:39 GMT
I haven't played a lot with it, but you may want to check whether setting HADOOP_NAMENODE_OPTS
and HADOOP_TASKTRACKER_OPTS helps. Let me know if you find a way to do this :)
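For example, something like the following in conf/hadoop-env.sh might work. This is an untested sketch: the heap sizes are illustrative, and it assumes the 0.20-era per-daemon *_OPTS variables are appended to the java command line after the default -Xmx, with HotSpot honouring the last -Xmx it sees.

```shell
# conf/hadoop-env.sh -- sketch only; heap values are illustrative.

# HADOOP_HEAPSIZE sets the default max heap (in MB) for every daemon:
export HADOOP_HEAPSIZE=1000

# Per-daemon JVM options; a later -Xmx here should override the
# default above, since the daemon OPTS come after it on the command line:
export HADOOP_NAMENODE_OPTS="-Xmx2048m"
export HADOOP_SECONDARYNAMENODE_OPTS="-Xmx2048m"
export HADOOP_DATANODE_OPTS="-Xmx1024m"
export HADOOP_JOBTRACKER_OPTS="-Xmx1024m"
export HADOOP_TASKTRACKER_OPTS="-Xmx512m"
```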


-----Original Message-----
From: Fernando Padilla [mailto:fern@alum.mit.edu] 
Sent: Wednesday, July 22, 2009 9:47 AM
To: common-user@hadoop.apache.org
Subject: Re: best way to set memory

I was thinking not of M/R jobs, but of the actual daemons:

When I start up a daemon (like below), they all use the same
hadoop-env.sh, which only lets you set HADOOP_HEAPSIZE once --
not differently for each daemon type.

bin/hadoop-daemon.sh start namenode
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start secondarynamenode
bin/hadoop-daemon.sh start jobtracker
bin/hadoop-daemon.sh start tasktracker

Amogh Vasekar wrote:
> If you need to set the Java options for memory, you can do this via configure() in your MR job
> -----Original Message-----
> From: Fernando Padilla [mailto:fern@alum.mit.edu] 
> Sent: Wednesday, July 22, 2009 9:11 AM
> To: common-user@hadoop.apache.org
> Subject: best way to set memory
> So.. I want to have different memory profiles for
> NameNode/DataNode/JobTracker/TaskTracker.
> But it looks like I only have one environment variable to modify,
> HADOOP_HEAPSIZE, and I might be running more than one daemon on a single
> box/deployment/conf directory.
> Is there a proper way to set the memory for each kind of server? Or has
> an issue been created to document this bug/deficiency?
