bigtop-user mailing list archives

From David Fryer <dfryer1...@gmail.com>
Subject Re: Starting Hadoop in Distributed Mode
Date Mon, 21 Jul 2014 17:15:02 GMT
I activated the Bigtop yum repository and installed the required hadoop
packages via yum. All of the computers in the cluster are running CentOS
6.5.
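
For reference, the install steps on CentOS 6 are roughly the following (the
repo URL/version and the package split below are placeholders from memory;
adjust them for the Bigtop release actually in use):

  # drop the Bigtop repo file in place, then install per-role packages
  wget -O /etc/yum.repos.d/bigtop.repo \
      http://archive.apache.org/dist/bigtop/bigtop-0.7.0/repos/centos6/bigtop.repo
  yum install hadoop-hdfs-namenode                           # master only
  yum install hadoop-hdfs-datanode hadoop-yarn-nodemanager   # each slave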

-David Fryer


On Mon, Jul 21, 2014 at 1:01 PM, Konstantin Boudnik <cos@apache.org> wrote:

> I see that your daemon is trying to log to /usr/lib/hadoop/logs, whereas
> Bigtop logs under /var/log, as required by Linux service good-behavior
> rules.
>
> The namenode doesn't recognize DataNodes via the slaves file; the DNs
> register themselves with the NN via an RPC mechanism.
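>
> For example, to see which DataNodes have actually registered, you can run
> the following on the NN node (it lists live and dead DNs):
>
>   sudo -u hdfs hdfs dfsadmin -report
>
> If a DN is missing from that report, its own log is the place to look, not
> the slaves file.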
>
> How did you install Hadoop? Using Bigtop packages or via a different
> mechanism? The fact that you are seeing an error message about cygwin not
> being found tells me that you are using derivative bits, not pure Bigtop.
> Is this the case?
>
> Regards
>   Cos
>
> On July 21, 2014 9:32:48 AM PDT, David Fryer <dfryer1193@gmail.com> wrote:
> >When I tried starting hadoop using the init scripts provided, the master
> >couldn't find any of the datanodes. It is my understanding that the
> >masters file is optional, but the slaves file is required. The scripts
> >that reference the slaves file have plural names (hadoop-daemons.sh
> >instead of hadoop-daemon.sh). I tried modifying the init scripts to run
> >hadoop-daemons.sh, and the script attempted to spawn processes on the
> >slaves referenced in the slaves file, but that produced the error:
> >Starting Hadoop namenode:                                  [  OK  ]
> >slave2: starting namenode, logging to
> >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-loki.out
> >master: starting namenode, logging to
> >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-odin.out
> >slave3: starting namenode, logging to
> >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-tyr.out
> >slave1: starting namenode, logging to
> >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-thor.out
> >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
> >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
> >directory
> >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
> >found
> >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
> >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
> >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
> >directory
> >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
> >found
> >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
> >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
> >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
> >directory
> >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
> >found
> >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
> >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 34:
> >/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or
> >directory
> >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not
> >found
> >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
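> >
> >For reference, the slaves file (/etc/hadoop/conf/slaves) is just one
> >hostname per line; mine looks like this:
> >
> >  master
> >  slave1
> >  slave2
> >  slave3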
> >
> >-David Fryer
> >
> >
> >On Mon, Jul 21, 2014 at 12:18 PM, Konstantin Boudnik <cos@apache.org>
> >wrote:
> >
> >> Hi David.
> >>
> >> The slaves files are really optional, if I remember right. In Bigtop
> >> we usually deploy Hadoop with the provided Puppet recipes, which have
> >> been battle-hardened over the years :)
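> >>
> >> The recipes live under bigtop-deploy/puppet in the Bigtop source tree.
> >> Very roughly (the exact config format and paths are from memory, so
> >> check the README there for your version): you describe the cluster
> >> topology in the provided config file, then run something like this on
> >> every node:
> >>
> >>   puppet apply -d --modulepath=bigtop-deploy/puppet/modules \
> >>       bigtop-deploy/puppet/manifests/site.pp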
> >>
> >> Cos
> >>
> >> On Mon, Jul 21, 2014 at 10:53AM, David Fryer wrote:
> >> > Hi Bigtop!
> >> >
> >> > I'm trying to get hadoop running in distributed mode, but the init
> >> > scripts don't seem to be referencing the slaves file in
> >> > /etc/hadoop/conf. Has anyone encountered this before?
> >> >
> >> > Thanks,
> >> > David Fryer
> >>
>
>
