hadoop-common-user mailing list archives

From: Todd Lipcon <t...@cloudera.com>
Subject: Re: Is there a single command to start the whole cluster in CDH3?
Date: Wed, 24 Nov 2010 06:27:19 GMT
Hi everyone,

Since this question is CDH-specific, it's better to ask on the cdh-user
mailing list:
https://groups.google.com/a/cloudera.org/group/cdh-user/topics?pli=1

Thanks
-Todd

On Wed, Nov 24, 2010 at 1:26 AM, Hari Sreekumar <hsreekumar@clickable.com> wrote:

> Hi Rahul,
>
>          I am not sure about CDH, but I have created a separate "hadoop"
> user to run my ASF Hadoop version, and it works fine. Maybe you can also
> try creating a new hadoop user and making it the owner of the Hadoop root
> directory.
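>
> Something like this, for example (the install path and user name here are
> just placeholders, adjust for your setup):
>
>     # create a dedicated user and hand it the hadoop install directory
>     sudo useradd -m hadoop
>     sudo chown -R hadoop:hadoop /usr/lib/hadoop
>     # start the daemons as that user
>     sudo -u hadoop /usr/lib/hadoop/bin/start-dfs.sh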
>
> HTH,
> Hari
>
> On Wed, Nov 24, 2010 at 11:51 AM, rahul patodi <patodirahul@gmail.com> wrote:
>
> > Hi Ricky,
> > For installing CDH3 you can refer to this tutorial:
> >
> > http://cloudera-tutorial.blogspot.com/2010/11/running-cloudera-in-distributed-mode.html
> >
> > All the steps in this tutorial are well tested. (In case of any query,
> > please leave a comment.)
> >
> >
> > On Wed, Nov 24, 2010 at 11:48 AM, rahul patodi <patodirahul@gmail.com> wrote:
> >
> > > Hi Hari,
> > > When I try to start the Hadoop daemons with bin/start-dfs.sh from
> > > /usr/lib/hadoop on the name node, it gives this error: "May not run
> > > daemons as root. Please specify HADOOP_NAMENODE_USER" (and the same
> > > for the other daemons). But when I start it using
> > > "/etc/init.d/hadoop-0.20-namenode start", it starts successfully.
> > > What is the reason behind that?
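> > >
> > > I am guessing from the error text that the start scripts refuse to
> > > launch daemons as root unless you tell them which user to run each
> > > daemon as, while the init scripts drop privileges themselves. If so,
> > > something like this in conf/hadoop-env.sh might help (the variable
> > > names are guessed from the error message and the user name is only
> > > an example, I have not verified them). Is that right?
> > >
> > >     # run each daemon as an unprivileged user instead of root
> > >     export HADOOP_NAMENODE_USER=hadoop
> > >     export HADOOP_DATANODE_USER=hadoop
> > >     export HADOOP_JOBTRACKER_USER=hadoop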
> > >
> > > On Wed, Nov 24, 2010 at 10:04 AM, Hari Sreekumar <hsreekumar@clickable.com> wrote:
> > >
> > >> Hi Ricky,
> > >>
> > >>      Yes, that's how it is meant to be. The machine where you run
> > >> start-dfs.sh will become the namenode, and the machine which you
> > >> specify in your "masters" file becomes the secondary namenode.
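> > >>
> > >> For example, if conf/masters on the node where you run start-dfs.sh
> > >> contains just this one line (the host name is only a placeholder),
> > >> that host will run the secondary namenode:
> > >>
> > >>     snn.example.com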
> > >>
> > >> Hari
> > >>
> > >> On Wed, Nov 24, 2010 at 2:13 AM, Ricky Ho <rickyphyllis@yahoo.com> wrote:
> > >>
> > >> > Thanks for pointing me to the right command.  I am using the CDH3
> > >> > distribution. I found that no matter what I put in the masters file,
> > >> > it always starts the NameNode on the machine where I issue the
> > >> > "start-all.sh" command, and always starts a SecondaryNameNode on all
> > >> > the other machines.  Any clue?
> > >> >
> > >> >
> > >> > Rgds,
> > >> > Ricky
> > >> >
> > >> > -----Original Message-----
> > >> > From: Hari Sreekumar [mailto:hsreekumar@clickable.com]
> > >> > Sent: Tuesday, November 23, 2010 10:25 AM
> > >> > To: common-user@hadoop.apache.org
> > >> > Subject: Re: Is there a single command to start the whole cluster in CDH3?
> > >> >
> > >> > Hi Ricky,
> > >> >
> > >> >         Which Hadoop version are you using? I am using the Apache
> > >> > hadoop-0.20.2 version, and I generally just run the
> > >> > $HADOOP_HOME/bin/start-dfs.sh and start-mapred.sh scripts on my
> > >> > master node. If passwordless ssh is configured, these scripts will
> > >> > start the required services on each node. You shouldn't have to
> > >> > start the services on each node individually. The secondary namenode
> > >> > is specified in the conf/masters file. The node where you call the
> > >> > start-*.sh script becomes the namenode (for start-dfs) or the
> > >> > jobtracker (for start-mapred). The node mentioned in the masters
> > >> > file becomes the secondary namenode, and the datanodes and
> > >> > tasktrackers are the nodes mentioned in the slaves file.
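> > >> >
> > >> > For example (the host names are placeholders, and this assumes the
> > >> > usual $HADOOP_HOME layout):
> > >> >
> > >> >     # conf/slaves: one datanode/tasktracker host per line
> > >> >     slave1.example.com
> > >> >     slave2.example.com
> > >> >
> > >> >     # one-time: passwordless ssh from the master to every node
> > >> >     ssh-keygen -t rsa -P ""
> > >> >     ssh-copy-id hadoop@slave1.example.com   # repeat for each node
> > >> >
> > >> >     # then, on the master node:
> > >> >     $HADOOP_HOME/bin/start-dfs.sh     # namenode here, 2NN from masters
> > >> >     $HADOOP_HOME/bin/start-mapred.sh  # jobtracker here, TTs from slaves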
> > >> >
> > >> > HTH,
> > >> > Hari
> > >> >
> > >> > On Tue, Nov 23, 2010 at 11:43 PM, Ricky Ho <rickyphyllis@yahoo.com> wrote:
> > >> >
> > >> > > I set up the cluster configuration in "masters", "slaves",
> > >> > > "core-site.xml", "hdfs-site.xml", and "mapred-site.xml", and
> > >> > > copied them to all the machines.
> > >> > >
> > >> > > Then I logged in to one of the machines and used the following to
> > >> > > start the cluster:
> > >> > > for service in /etc/init.d/hadoop-0.20-*; do sudo $service start; done
> > >> > >
> > >> > > I expected this command to SSH to all the other machines (based on
> > >> > > the "masters" and "slaves" files) to start the corresponding
> > >> > > daemons, but obviously it is not doing that in my setup.
> > >> > >
> > >> > > Am I missing something in my setup?
> > >> > >
> > >> > > Also, where do I specify where the Secondary NameNode runs?
> > >> > >
> > >> > > Rgds,
> > >> > > Ricky
> > >
> > > --
> > > -Thanks and Regards,
> > > Rahul Patodi
> > > Associate Software Engineer,
> > > Impetus Infotech (India) Private Limited,
> > > www.impetus.com
> > > Mob:09907074413
> >
> > --
> > -Thanks and Regards,
> > Rahul Patodi
> > Associate Software Engineer,
> > Impetus Infotech (India) Private Limited,
> > www.impetus.com
> > Mob:09907074413
> >
>



-- 
Todd Lipcon
Software Engineer, Cloudera
