hadoop-common-user mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From Divij Durve <divij.t...@gmail.com>
Subject Trying to setup Cluster
Date Wed, 17 Jun 2009 16:30:57 GMT
Im trying to setup a cluster with 3 different machines running Fedora. I
cant get them to log into the localhost without the password but thats the
least of my worries at the moment.

I am posting my config files and the master and slave files let me know if
anyone can spot a problem with the configs...


Hadoop-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
    <name>dfs.data.dir</name>
        <value>$HADOOP_HOME/dfs-data</value>
            <final>true</final>
              </property>

 <property>
     <name>dfs.name.dir</name>
         <value>$HADOOP_HOME/dfs-name</value>
             <final>true</final>
               </property>


<property>
  <name>hadoop.tmp.dir</name>
    <value>$HADOOP_HOME/hadoop-tmp</value>
      <description>A base for other temporary directories.</description>
      </property>


<property>
  <name>fs.default.name</name>
    <value>hdfs://gobi.<something>.<something>:54310</value>
      <description>The name of the default file system.  A URI whose
        scheme and authority determine the FileSystem implementation.  The
          uri's scheme determines the config property (fs.SCHEME.impl)
naming
            the FileSystem implementation class.  The uri's authority is
used to
              determine the host, port, etc. for a FileSystem.</description>
              </property>

<property>
  <name>mapred.job.tracker</name>
    <value>kalahari.<something>.<something>:54311</value>
      <description>The host and port that the MapReduce job tracker runs
        at.  If "local", then jobs are run in-process as a single map
          and reduce task.
            </description>
            </property>

 <property>
     <name>mapred.system.dir</name>
         <value>$HADOOP_HOME/mapred-system</value>
             <final>true</final>
               </property>

<property>
  <name>dfs.replication</name>
    <value>1</value>
      <description>Default block replication.
        The actual number of replications can be specified when the file is
created.
          The default is used if replication is not specified in create
time.
            </description>
            </property>


<property>
  <name>mapred.local.dir</name>
    <value>$HADOOP_HOME/mapred-local</value>
  <name>dfs.replication</name>
    <value>1</value>
</property>


</configuration>


Slave:
kongur.something.something

master:
kalahari.something.something

i execute the dfs-start.sh command from gobi.something.something.

is there any other info that i should provide in order to help? Also Kongur
is where im running the data node the master file on kongur should have
localhost in it rite? thanks for the help

Divij

Mime
  • Unnamed multipart/alternative (inline, None, 0 bytes)
View raw message