hadoop-common-user mailing list archives

From "Khalil Honsali" <k.hons...@gmail.com>
Subject Re: Not able to start Data Node
Date Wed, 02 Jan 2008 06:29:44 GMT
Hi,

I think you need to post more information, for example an excerpt of the
failing datanode's log. Also, please clarify the connectivity situation:
- are you able to ssh without a password (master to slave, slave to master,
slave to slave, master to master)? You shouldn't be entering a password
every time...
- are you able to telnet between the nodes (not strictly necessary, but preferred)?
- have you verified with the netstat command that the ports are listening?
(See the command sketch just below.)
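
For reference, a rough sketch of those checks (assuming the hadoop user is
hdusr, as in your hadoop.tmp.dir, and that the hostnames are master and
slave; the IPs are only examples, adjust everything to your setup):

  # /etc/hosts on every node should map the hostnames to real IPs, e.g.:
  #   192.168.0.1  master
  #   192.168.0.2  slave

  # on the master: generate a key without a passphrase (if not done already)
  ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
  # append the public key to authorized_keys on each node, including the master itself
  # (assumes ~/.ssh already exists on the slave)
  cat ~/.ssh/id_rsa.pub | ssh hdusr@slave 'cat >> ~/.ssh/authorized_keys'
  # this must now log in without prompting for a password
  ssh hdusr@slave hostname

  # on the master: check that the namenode/jobtracker ports from hadoop-site.xml are listening
  netstat -tlnp | grep -E ':54310|:54311'
  # optionally, from a slave, check that the namenode port is reachable
  telnet master 54310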

Besides, does the tasktracker start OK on the slave but not the datanode?
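
Also, about the "incompatible" message from your point 4: after the namenode
is reformatted, the datanodes keep the old namespaceID and refuse to start.
A commonly suggested workaround is roughly the following (it deletes the HDFS
blocks stored on that slave; the path assumes the hadoop user is hdusr, so
that hadoop.tmp.dir resolves to /home/hdusr/hadoop-hdusr as in your config):

  # on the master: stop all daemons first
  bin/stop-all.sh
  # on the affected slave: remove the stale datanode state
  rm -rf /home/hdusr/hadoop-hdusr/dfs/data
  # back on the master: start dfs again; the datanode re-registers with the new namespaceID
  bin/start-dfs.sh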

K. Honsali

On 02/01/2008, Dhaya007 <mgdhayal@gmail.com> wrote:
>
>
> I am new to Hadoop, so if anything here is wrong please correct me ....
> I have configured a single/multi-node cluster using the following link:
>
> http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_%28Single-Node_Cluster%29
>
> I have followed the link, but I am not able to start Hadoop in the multi-node
> environment.
> The problems I am facing are as follows:
> 1. I have configured the master and slave nodes with passphraseless ssh; if I try
> to run start-dfs.sh it prompts for the password for the master/slave machines. (I
> have copied the master's .ssh/id_rsa.pub key into the slaves' authorized_keys
> file.)
>
> 2. After giving the password, the datanode, namenode, jobtracker and tasktracker
> start successfully on the master, but the datanode is not started on the slave.
>
>
> 3. Sometimes step 2 works and sometimes it says permission denied.
>
> 4. I have checked the datanode log file on the slave; it says incompatible node.
> I then formatted the slave and the master and started dfs with start-dfs.sh, but
> I am still getting the error.
>
>
> The host entries in /etc/hosts on both master and slave are:
> master
> slave
> conf/masters
> master
> conf/slaves
> master
> slave
>
> The hadoop-site.xml for both master and slave:
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/home/hdusr/hadoop-${user.name}</value>
>   <description>A base for other temporary directories.</description>
> </property>
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://master:54310</value>
>   <description>The name of the default file system.  A URI whose
>   scheme and authority determine the FileSystem implementation.  The
>   uri's scheme determines the config property (fs.SCHEME.impl) naming
>   the FileSystem implementation class.  The uri's authority is used to
>   determine the host, port, etc. for a filesystem.</description>
> </property>
>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>master:54311</value>
>   <description>The host and port that the MapReduce job tracker runs
>   at.  If "local", then jobs are run in-process as a single map
>   and reduce task.
>   </description>
> </property>
>
> <property>
>   <name>dfs.replication</name>
>   <value>2</value>
>   <description>Default block replication.
>   The actual number of replications can be specified when the file is
> created.
>   The default is used if replication is not specified in create time.
>   </description>
> </property>
>
> <property>
>   <name>mapred.map.tasks</name>
>   <value>20</value>
>   <description>As a rule of thumb, use 10x the number of slaves (i.e.,
> number of tasktrackers).
>   </description>
> </property>
>
> <property>
>   <name>mapred.reduce.tasks</name>
>   <value>4</value>
>   <description>As a rule of thumb, use 2x the number of slave processors
> (i.e., number of tasktrackers).
>   </description>
> </property>
> </configuration>
>
> Please help me to resolve this, or else point me to any other tutorial for
> multi-node cluster setup. I am eagerly waiting for the tutorials.
>
>
> Thanks
>
