From: Fernando Padilla <fern@alum.mit.edu>
Date: Wed, 22 Jul 2009 07:50:53 -0700
To: common-user@hadoop.apache.org
Subject: Re: best way to set memory
Message-ID: <4A67274D.2020607@alum.mit.edu>

But right now the script forcefully adds an extra -Xmx1000m even if you
don't want it. I guess I'll be submitting a patch for hadoop-daemon.sh
later. :)

Thank you all.

On 7/22/09 2:25 AM, Amogh Vasekar wrote:
> I haven't played a lot with it, but you may want to check whether setting
> HADOOP_NAMENODE_OPTS or HADOOP_TASKTRACKER_OPTS helps. Let me know if
> you find a way to do this :)
>
> Cheers!
> Amogh
>
> -----Original Message-----
> From: Fernando Padilla [mailto:fern@alum.mit.edu]
> Sent: Wednesday, July 22, 2009 9:47 AM
> To: common-user@hadoop.apache.org
> Subject: Re: best way to set memory
>
> I was thinking not of M/R jobs, but of the actual daemons. When I start
> a daemon (as below), they all use the same hadoop-env.sh, which only
> lets you set HADOOP_HEAPSIZE once, not differently for each daemon type.
>
> bin/hadoop-daemon.sh start namenode
> bin/hadoop-daemon.sh start datanode
> bin/hadoop-daemon.sh start secondarynamenode
> bin/hadoop-daemon.sh start jobtracker
> bin/hadoop-daemon.sh start tasktracker
>
> Amogh Vasekar wrote:
>> If you need to set the Java options for memory, you can do this via
>> configure in your MR job.
>>
>> -----Original Message-----
>> From: Fernando Padilla [mailto:fern@alum.mit.edu]
>> Sent: Wednesday, July 22, 2009 9:11 AM
>> To: common-user@hadoop.apache.org
>> Subject: best way to set memory
>>
>> So, I want to have different memory profiles for
>> NameNode/DataNode/JobTracker/TaskTracker.
>>
>> But it looks like I have only one environment variable to modify,
>> HADOOP_HEAPSIZE, and I might be running more than one daemon on a
>> single box/deployment/conf directory.
>>
>> Is there a proper way to set the memory for each kind of server? Or
>> has an issue been created to document this bug/deficiency?
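
P.S. For the archives, here is a sketch of what per-daemon settings could
look like in conf/hadoop-env.sh, using the HADOOP_*_OPTS variables Amogh
mentioned. The specific heap sizes are made-up examples, and the exact set
of *_OPTS variables honored depends on your Hadoop version; this relies on
the JVM taking the last -Xmx flag on the command line, so a per-daemon -Xmx
in *_OPTS wins over the default from HADOOP_HEAPSIZE:

```shell
# conf/hadoop-env.sh (sketch, example values)

# Default heap in MB for any daemon not overridden below.
export HADOOP_HEAPSIZE=1000

# Per-daemon overrides: bin/hadoop appends these after the default heap
# flag, and the JVM honors the last -Xmx it sees.
export HADOOP_NAMENODE_OPTS="-Xmx2048m $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Xmx2048m $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Xmx1024m $HADOOP_DATANODE_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Xmx2048m $HADOOP_JOBTRACKER_OPTS"
export HADOOP_TASKTRACKER_OPTS="-Xmx1024m $HADOOP_TASKTRACKER_OPTS"
```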