hadoop-common-user mailing list archives

From "Sridhar Raman" <sridhar.ra...@gmail.com>
Subject Re: User accounts in Master and Slaves
Date Wed, 23 Apr 2008 08:39:59 GMT
Ok, what about the issue regarding the users?  Do all the machines need to
be under the same user?

On Wed, Apr 23, 2008 at 12:43 PM, Harish Mallipeddi <
harish.mallipeddi@gmail.com> wrote:

> On Wed, Apr 23, 2008 at 3:03 PM, Sridhar Raman <sridhar.raman@gmail.com>
> wrote:
>
> > After trying out Hadoop in a single machine, I decided to run a
> MapReduce
> > across multiple machines.  This is the approach I followed:
> > 1 Master
> > 1 Slave
> >
> > (A doubt here:  Can my Master also be used to execute the Map/Reduce
> > functions?)
> >
>
> If you add the master node to the list of slaves (conf/slaves), then the
> master node will also run a TaskTracker.
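As a minimal sketch of the point above: conf/slaves is just a plain-text file with one worker hostname per line, and listing the master's own hostname there makes the start scripts launch a TaskTracker on it as well. The hostnames below are placeholders, not from the thread:

```shell
# Hypothetical hostnames. conf/slaves lists one worker per line;
# including "master" means the master also runs a TaskTracker.
printf 'master\nslave1\n' > /tmp/slaves.example
cat /tmp/slaves.example
```

In a real cluster this file lives at conf/slaves under the Hadoop install directory on the master; the start scripts read it and ssh to each listed host.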
>
>
> >
> > To do this, I set up the masters and slaves files in the conf directory.
> > Following the instructions in this page -
> > http://hadoop.apache.org/core/docs/current/cluster_setup.html, I had set
> > up
> > sshd in both the machines, and was able to ssh from one to the other.
> >
> > I tried to run bin/start-dfs.sh.  Unfortunately, this asked for a
> password
> > for user1@slave, but on the slave there was only user2, while on the
> > master user1 was the logged-in user.  How do I resolve this?  Should the
> > user accounts be present in all the machines?  Or can I specify this
> somewhere?
> >
>
>
>
> --
> Harish Mallipeddi
> circos.com : poundbang.in/blog/
>
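On the password prompt described above: the start scripts ssh to each host in conf/slaves as the user who invoked them, so the usual approach is to create the same account (often a dedicated hadoop user) on every machine and set up passwordless key-based SSH from the master. A minimal sketch, assuming OpenSSH; the key path is a placeholder and the copy step needs a live slave host, so it is shown as a comment:

```shell
# Sketch only. Generate a passphrase-less key pair on the master
# (placeholder path used here so the example is self-contained):
ssh-keygen -t rsa -N "" -f /tmp/demo_id_rsa -q

# Then install the public key on the matching account on each slave
# (requires a reachable host, so left as a comment):
#   ssh-copy-id -i /tmp/demo_id_rsa.pub user1@slave

# The public key that would be copied:
ls /tmp/demo_id_rsa.pub
```

After the key is installed, `ssh user1@slave` from the master should log in without a password, and start-dfs.sh should no longer prompt.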
