bigtop-user mailing list archives

From jay vyas <jayunit100.apa...@gmail.com>
Subject Re: Starting Hadoop in Distributed Mode
Date Mon, 21 Jul 2014 22:44:27 GMT
I suggest using Puppet as well; it's way easier than doing it manually.

Basically, I think you could:

- clone down the Bigtop GitHub repo and check out branch-0.7.0
- put those Puppet recipes on your bare-metal nodes, and update the config
CSV file to point to the IP of the master
- run puppet apply on each node

That's it. It should, I think, all just work automagically.
Right?
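For what it's worth, those steps sketch out roughly like this (the repo URL, the CSV file name, and the master IP below are assumptions based on the branch-0.7.0 layout, so double-check them against your actual checkout):

```shell
# Run on each bare-metal node (as root); paths follow the assumed
# bigtop-deploy/puppet layout of branch-0.7.0.
git clone https://github.com/apache/bigtop.git
cd bigtop
git checkout branch-0.7.0
cd bigtop-deploy/puppet

# Point the config CSV at the master, e.g. (key names are assumptions,
# edit config/site.csv or whatever the CSV is called in your checkout):
#   hadoop_head_node,10.10.10.1
#   hadoop_storage_dirs,/data/1,/data/2

# Apply the recipes on this node.
puppet apply -d --modulepath=modules manifests/site.pp
```

Running the same apply on every node should let the master and the slaves configure themselves from the shared CSV, with no slaves file involved.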



On Mon, Jul 21, 2014 at 2:44 PM, David Fryer <dfryer1193@gmail.com> wrote:

> Yes, Bigtop 0.7.0 is installed.
>
> -David Fryer
>
>
> On Mon, Jul 21, 2014 at 2:33 PM, Konstantin Boudnik <cos@apache.org>
> wrote:
>
>> Sorry for being a nag - did you install Bigtop 0.7.0?
>>
>> Cc'ing dev@ list as well
>>   Cos
>>
>> On Mon, Jul 21, 2014 at 01:15PM, David Fryer wrote:
>> > I activated the bigtop yum repository, and installed the required hadoop
>> > packages via yum. All of the computers in the cluster are running CentOS
>> > 6.5.
>> >
>> > -David Fryer
>> >
>> >
>> > On Mon, Jul 21, 2014 at 1:01 PM, Konstantin Boudnik <cos@apache.org>
>> > wrote:
>> >
>> > > I see that your daemon is trying to log to
>> > > /usr/lib/hadoop/logs whereas Bigtop logs under /var/log as required by
>> > > Linux service good-behavior rules.
>> > >
>> > > The way the namenode recognizes DNs isn't via the slaves file, but by DNs
>> > > registering with the NN via the RPC mechanism.
>> > >
>> > > How did you install Hadoop? Using Bigtop packages or via a different
>> > > mechanism? The fact that you are seeing an error message about cygwin not
>> > > being found tells me that you are using derivative bits, not pure Bigtop.
>> > > Is this the case?
>> > >
>> > > Regards
>> > >   Cos
>> > >
>> > > On July 21, 2014 9:32:48 AM PDT, David Fryer <dfryer1193@gmail.com>
>> > > wrote:
>> > > >When I tried starting hadoop using the init scripts provided, the
>> > > >master
>> > > >couldn't find any of the datanodes. It is my understanding that the
>> > > >masters
>> > > >file is optional, but the slaves file is required. The scripts that
>> > > >reference the slaves file are named in plural (instead of
>> > > >hadoop-daemon.sh,
>> > > >use hadoop-daemons.sh). I tried modifying the init scripts to run
>> > > >hadoop-daemons.sh, and the script attempted to spawn processes on the
>> > > >slaves referenced in the slaves file, but that produced the error:
>> > > >Starting Hadoop namenode:                                  [  OK  ]
>> > > >slave2: starting namenode, logging to
>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-loki.out
>> > > >master: starting namenode, logging to
>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-odin.out
>> > > >slave3: starting namenode, logging to
>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-tyr.out
>> > > >slave1: starting namenode, logging to
>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-thor.out
>> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
>> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
>> > > >directory
>> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
>> > > >found
>> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
>> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
>> > > >directory
>> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
>> > > >found
>> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
>> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
>> > > >directory
>> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
>> > > >found
>> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
>> > > >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
>> > > >directory
>> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
>> > > >found
>> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>> > > >
>> > > >-David Fryer
>> > > >
>> > > >
>> > > >On Mon, Jul 21, 2014 at 12:18 PM, Konstantin Boudnik <cos@apache.org>
>> > > >wrote:
>> > > >
>> > > >> Hi David.
>> > > >>
>> > > >> Slaves files are really optional, if I remember right. In Bigtop we
>> > > >> usually deploy Hadoop with the provided Puppet recipes, which have been
>> > > >> battle-hardened over the years :)
>> > > >>
>> > > >> Cos
>> > > >>
>> > > >> On Mon, Jul 21, 2014 at 10:53AM, David Fryer wrote:
>> > > >> > Hi Bigtop!
>> > > >> >
>> > > >> > I'm working on trying to get Hadoop running in distributed mode,
>> > > >> > but the init scripts don't seem to be referencing the slaves file
>> > > >> > in /etc/hadoop/conf. Has anyone encountered this before?
>> > > >> >
>> > > >> > Thanks,
>> > > >> > David Fryer
>> > > >>
>> > >
>> > >
>>
>
>


-- 
jay vyas
