From: Todd Lipcon
Date: Wed, 24 Nov 2010 01:27:19 -0500
Subject: Re: Is there a single command to start the whole cluster in CDH3 ?
To: common-user@hadoop.apache.org

Hi everyone,

Since this question is CDH-specific, it's better to ask on the cdh-user
mailing list:
https://groups.google.com/a/cloudera.org/group/cdh-user/topics?pli=1

Thanks
-Todd

On Wed, Nov 24, 2010 at 1:26 AM, Hari Sreekumar wrote:
> Hi Rahul,
>
> I am not sure about CDH, but I have created a separate "hadoop" user to
> run my ASF Hadoop version, and it works fine. Maybe you can also try
> creating a new hadoop user and making that user the owner of the Hadoop
> root directory.
>
> HTH,
> Hari
>
> On Wed, Nov 24, 2010 at 11:51 AM, rahul patodi wrote:
>
> > hi Ricky,
> > for installing CDH3 you can refer to this tutorial:
> > http://cloudera-tutorial.blogspot.com/2010/11/running-cloudera-in-distributed-mode.html
> > all the steps in this tutorial are well tested. (in case of any query,
> > please leave a comment)
> >
> > On Wed, Nov 24, 2010 at 11:48 AM, rahul patodi wrote:
> >
> > > hi Hari,
> > > when I try to start the Hadoop daemons on the namenode with
> > > "bin/start-dfs.sh" from /usr/lib/hadoop, it gives this error: "May
> > > not run daemons as root. Please specify HADOOP_NAMENODE_USER" (same
> > > for the other daemons).
> > > But when I start the namenode with "/etc/init.d/hadoop-0.20-namenode
> > > start", it starts successfully.
> > > What's the reason behind that?
> > >
> > > On Wed, Nov 24, 2010 at 10:04 AM, Hari Sreekumar
> > > <hsreekumar@clickable.com> wrote:
> > >
> > >> Hi Ricky,
> > >>
> > >> Yes, that's how it is meant to be. The machine where you run
> > >> start-dfs.sh will become the namenode, and the machine which you
> > >> specify in your "masters" file becomes the secondary namenode.
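[For context on the "May not run daemons as root" error above: CDH3's start
scripts refuse to launch daemons when invoked as root unless per-daemon user
variables are set, which is also why the packaged init.d scripts work while
bin/start-dfs.sh run as root does not. A minimal sketch of such settings in
conf/hadoop-env.sh; the "hadoop" account name here is an assumption (a
pre-created system user), not something from the thread:]

```shell
# Hypothetical excerpt from conf/hadoop-env.sh. When the start scripts
# are run as root, the root check asks for an explicit user per daemon;
# "hadoop" below is an assumed, already-created system account.
export HADOOP_NAMENODE_USER=hadoop
export HADOOP_SECONDARYNAMENODE_USER=hadoop
export HADOOP_DATANODE_USER=hadoop
export HADOOP_JOBTRACKER_USER=hadoop
export HADOOP_TASKTRACKER_USER=hadoop
```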
> > >>
> > >> Hari
> > >>
> > >> On Wed, Nov 24, 2010 at 2:13 AM, Ricky Ho wrote:
> > >>
> > >> > Thanks for pointing me to the right command. I am using the CDH3
> > >> > distribution.
> > >> > I figured out that no matter what I put in the masters file, it
> > >> > always starts the NameNode on the machine where I issue the
> > >> > "start-all.sh" command, and always starts a SecondaryNameNode on
> > >> > all the other machines. Any clue?
> > >> >
> > >> > Rgds,
> > >> > Ricky
> > >> >
> > >> > -----Original Message-----
> > >> > From: Hari Sreekumar [mailto:hsreekumar@clickable.com]
> > >> > Sent: Tuesday, November 23, 2010 10:25 AM
> > >> > To: common-user@hadoop.apache.org
> > >> > Subject: Re: Is there a single command to start the whole cluster
> > >> > in CDH3 ?
> > >> >
> > >> > Hi Ricky,
> > >> >
> > >> > Which Hadoop version are you using? I am using the hadoop-0.20.2
> > >> > Apache version, and I generally just run the
> > >> > $HADOOP_HOME/bin/start-dfs.sh and start-mapred.sh scripts on my
> > >> > master node. If passwordless ssh is configured, these scripts will
> > >> > start the required services on each node. You shouldn't have to
> > >> > start the services on each node individually. The secondary
> > >> > namenode is specified in the conf/masters file. The node where you
> > >> > call the start-*.sh script becomes the namenode (for start-dfs) or
> > >> > jobtracker (for start-mapred). The node mentioned in the masters
> > >> > file becomes the secondary namenode, and the datanodes and
> > >> > tasktrackers are the nodes mentioned in the slaves file.
> > >> >
> > >> > HTH,
> > >> > Hari
> > >> >
> > >> > On Tue, Nov 23, 2010 at 11:43 PM, Ricky Ho wrote:
> > >> >
> > >> > > I set up the cluster configuration in "masters", "slaves",
> > >> > > "core-site.xml", "hdfs-site.xml", and "mapred-site.xml" and
> > >> > > copied them to all the machines.
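[The host-to-daemon mapping Hari describes above can be sketched as follows.
Hostnames are made up, and the script only writes throwaway copies of the two
topology files to illustrate which daemon each host would get:]

```shell
#!/bin/sh
# Illustrative sketch of the 0.20-era topology files: conf/masters lists
# the SecondaryNameNode host, conf/slaves lists the DataNode/TaskTracker
# hosts, and the NameNode/JobTracker run wherever start-dfs.sh and
# start-mapred.sh are invoked. Hostnames below are invented.
conf=$(mktemp -d)
printf 'snn-host\n'         > "$conf/masters"
printf 'worker1\nworker2\n' > "$conf/slaves"

echo "NameNode + JobTracker : $(hostname) (the node running start-*.sh)"
while read -r h; do echo "SecondaryNameNode     : $h"; done < "$conf/masters"
while read -r h; do echo "DataNode + TaskTracker: $h"; done < "$conf/slaves"
rm -rf "$conf"
```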
> > >> > >
> > >> > > Then I logged in to one of the machines and used the following
> > >> > > to start the cluster:
> > >> > >
> > >> > > for service in /etc/init.d/hadoop-0.20-*; do sudo $service start; done
> > >> > >
> > >> > > I expected this command to SSH to all the other machines (based
> > >> > > on the "masters" and "slaves" files) and start the corresponding
> > >> > > daemons, but obviously it is not doing that in my setup.
> > >> > >
> > >> > > Am I missing something in my setup?
> > >> > >
> > >> > > Also, where do I specify where the Secondary NameNode runs?
> > >> > >
> > >> > > Rgds,
> > >> > > Ricky
> >
> > --
> > -Thanks and Regards,
> > Rahul Patodi
> > Associate Software Engineer,
> > Impetus Infotech (India) Private Limited,
> > www.impetus.com
> > Mob:09907074413

--
Todd Lipcon
Software Engineer, Cloudera
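[Editorial note on the init.d loop in Ricky's question: those service scripts
only start daemons on the local machine, while the ssh fan-out he expected is
what bin/start-dfs.sh / start-mapred.sh perform. A dry-run sketch of that
fan-out, with made-up hostnames and the ssh call replaced by an echo so it
runs anywhere; the real scripts invoke bin/hadoop-daemon.sh over ssh for each
host in conf/slaves:]

```shell
#!/bin/sh
# Dry-run sketch of the fan-out done by the start-*.sh scripts. The
# init.d services touch only the local box; start-dfs.sh instead reads
# conf/slaves and starts a daemon on every listed host via ssh.
slaves=$(mktemp)
printf 'worker1\nworker2\n' > "$slaves"   # stand-in for conf/slaves

while read -r host; do
  # Real scripts do roughly:
  #   ssh "$host" "$HADOOP_HOME/bin/hadoop-daemon.sh start datanode"
  echo "would ssh to $host and start datanode"
done < "$slaves"
rm -f "$slaves"
```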