hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Lucene-hadoop Wiki] Update of "GettingStartedWithHadoop" by mahadevkonar
Date Thu, 24 Aug 2006 02:24:32 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for change notification.

The following page has been changed by mahadevkonar:
http://wiki.apache.org/lucene-hadoop/GettingStartedWithHadoop

------------------------------------------------------------------------------
  = Setting Up a Cluster using Hadoop scripts =
  This section explains how to set up a Hadoop cluster running Hadoop DFS and Hadoop Mapreduce.
The startup scripts are in hadoop/bin. The slaves file in hadoop/conf lists all the slave nodes
that will join the DFS and MapReduce cluster; edit it to add nodes to your cluster. You need
to edit the slaves file only on the machines on which you plan to run the Jobtracker and
Namenode. Next, edit the file hadoop-env.sh in the hadoop/conf directory and make sure
JAVA_HOME is set correctly. You can change the other environment variables to suit your
requirements. HADOOP_HOME is determined automatically from the location you run the hadoop
scripts from.
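As a rough sketch of the steps above, the following populates a slaves file and sets JAVA_HOME in hadoop-env.sh. The host names and the JDK path are assumptions for illustration; substitute your own cluster nodes and Java install, and write into your real hadoop/conf directory rather than /tmp.

```shell
#!/bin/sh
# Sketch only: hypothetical hosts and paths, written to /tmp for illustration.
CONF_DIR=/tmp/hadoop-conf-example        # stand-in for hadoop/conf
mkdir -p "$CONF_DIR"

# One slave host per line; these names are made up.
cat > "$CONF_DIR/slaves" <<'EOF'
slave01.example.com
slave02.example.com
EOF

# Point JAVA_HOME at your JDK (this path is an assumption).
echo 'export JAVA_HOME=/usr/lib/jvm/java' >> "$CONF_DIR/hadoop-env.sh"

cat "$CONF_DIR/slaves"
```

Remember this is only needed on the machines that will run the Jobtracker and Namenode; the startup scripts read the slaves file from there.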
  
- == Starting up DFS ==
+ == Starting up Hadoop ==
  === Formatting the Namenode ===
   * You must format the Namenode before your first use of Hadoop, and only then. Do not
format a Namenode that has already been running Hadoop: formatting erases your DFS data. Run
bin/hadoop namenode -format on the node you plan to run as the Namenode.
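Since formatting a live Namenode destroys DFS data, a simple guard like the sketch below can help. The name-directory path is an assumption; match it to the dfs.name.dir setting in your own configuration, and uncomment the real command only on an actual Hadoop install.

```shell
#!/bin/sh
# Guard against accidental re-formatting. The default path below is a
# hypothetical dfs.name.dir; override via HADOOP_NAME_DIR.
NAME_DIR="${HADOOP_NAME_DIR:-/tmp/hadoop-example-dfs/name}"
if [ -d "$NAME_DIR" ]; then
  echo "Namenode already formatted at $NAME_DIR; refusing to format."
else
  echo "Formatting new Namenode..."
  # bin/hadoop namenode -format   # uncomment on a real Hadoop install
fi
```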
  
