hadoop-common-commits mailing list archives

From a.@apache.org
Subject git commit: MAPREDUCE-6072. Remove INSTALL document (Akira AJISAKA via aw)
Date Mon, 29 Sep 2014 15:31:18 GMT
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 0cfff28f5 -> c65ef9ca1


MAPREDUCE-6072. Remove INSTALL document (Akira AJISAKA via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c65ef9ca
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c65ef9ca
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c65ef9ca

Branch: refs/heads/branch-2
Commit: c65ef9ca1e93b05a764c4ff774f5ec9ae8ddcb3a
Parents: 0cfff28
Author: Allen Wittenauer <aw@apache.org>
Authored: Mon Sep 29 08:31:10 2014 -0700
Committer: Allen Wittenauer <aw@apache.org>
Committed: Mon Sep 29 08:31:10 2014 -0700

----------------------------------------------------------------------
 BUILDING.txt                         | 11 +++++
 hadoop-mapreduce-project/CHANGES.txt |  2 +
 hadoop-mapreduce-project/INSTALL     | 70 -------------------------------
 3 files changed, 13 insertions(+), 70 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c65ef9ca/BUILDING.txt
----------------------------------------------------------------------
diff --git a/BUILDING.txt b/BUILDING.txt
index 3940a98..bbad5ef 100644
--- a/BUILDING.txt
+++ b/BUILDING.txt
@@ -171,6 +171,17 @@ Create a local staging version of the website (in /tmp/hadoop-site)
   $ mvn clean site; mvn site:stage -DstagingDirectory=/tmp/hadoop-site
 
 ----------------------------------------------------------------------------------
+Installing Hadoop
+
+Look for these HTML files after building the documentation with the commands above.
+
+  * Single Node Setup:
+    hadoop-project-dist/hadoop-common/SingleCluster.html
+
+  * Cluster Setup:
+    hadoop-project-dist/hadoop-common/ClusterSetup.html
+
+----------------------------------------------------------------------------------
 
 Handling out of memory errors in builds
 

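The BUILDING.txt hunk above points readers at generated HTML files. A minimal sketch of producing and locating them, assuming a checked-out Hadoop source tree, Maven on the PATH, and that the staged site mirrors the module layout (the exact staging paths are an assumption, not stated by this commit):

```shell
# Build and stage the documentation site locally (commands from BUILDING.txt)
mvn clean site
mvn site:stage -DstagingDirectory=/tmp/hadoop-site

# The setup guides referenced by the new "Installing Hadoop" section
# should then be present under the staging directory (assumed layout):
ls /tmp/hadoop-site/hadoop-project-dist/hadoop-common/SingleCluster.html
ls /tmp/hadoop-site/hadoop-project-dist/hadoop-common/ClusterSetup.html
```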
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c65ef9ca/hadoop-mapreduce-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/CHANGES.txt b/hadoop-mapreduce-project/CHANGES.txt
index 872e58a..23d991f 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -41,6 +41,8 @@ Release 2.6.0 - UNRELEASED
     scheduler resource type is memory plus cpu. (Peng Zhang and Varun Vasudev
     via zjshen)
 
+    MAPREDUCE-6072. Remove INSTALL document (Akira AJISAKA via aw)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c65ef9ca/hadoop-mapreduce-project/INSTALL
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/INSTALL b/hadoop-mapreduce-project/INSTALL
deleted file mode 100644
index f52fe20..0000000
--- a/hadoop-mapreduce-project/INSTALL
+++ /dev/null
@@ -1,70 +0,0 @@
-To compile Hadoop MapReduce next, do the following:
-
-Step 1) Install dependencies for yarn
-
-See http://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/README
-Make sure the protobuf library is in your library path, or set: export LD_LIBRARY_PATH=/usr/local/lib
-
-Step 2) Checkout
-
-svn checkout http://svn.apache.org/repos/asf/hadoop/common/trunk
-
-Step 3) Build
-
-Go to the common directory and choose your regular common build command. For example:
-
-export MAVEN_OPTS=-Xmx512m
-mvn clean package -Pdist -Dtar -DskipTests -Pnative
-
-You can omit -Pnative if you don't want to build the native packages.
-
-Step 4) Untar the tarball from hadoop-dist/target/ into a clean and different
-directory, say HADOOP_YARN_HOME.
-
-Step 5)
-Start hdfs
-
-To run Hadoop MapReduce next applications:
-
-Step 6) Export the following variables, pointing at where you have things installed.
-You probably want to export these in hadoop-env.sh and yarn-env.sh as well.
-
-export HADOOP_MAPRED_HOME=<mapred loc>
-export HADOOP_COMMON_HOME=<common loc>
-export HADOOP_HDFS_HOME=<hdfs loc>
-export HADOOP_YARN_HOME=directory where you untarred yarn
-export HADOOP_CONF_DIR=<conf loc>
-export YARN_CONF_DIR=$HADOOP_CONF_DIR
-
-Step 7) Set up config: to run MapReduce applications, which are now in user land, you need to set up the nodemanager with the following configuration in your yarn-site.xml before you start the nodemanager.
-    <property>
-      <name>yarn.nodemanager.aux-services</name>
-      <value>mapreduce_shuffle</value>
-    </property>
-
-    <property>
-      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
-      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
-    </property>
-
-Step 8) Modify mapred-site.xml to use yarn framework
-    <property>
-      <name>mapreduce.framework.name</name>
-      <value>yarn</value>
-    </property>
-
-Step 9) cd $HADOOP_YARN_HOME
-
-Step 10) sbin/yarn-daemon.sh start resourcemanager
-
-Step 11) sbin/yarn-daemon.sh start nodemanager
-
-Step 12) sbin/mr-jobhistory-daemon.sh start historyserver
-
-Step 13) You are all set. An example of how to run a MapReduce job:
-cd $HADOOP_MAPRED_HOME
-ant examples -Dresolvers=internal
-$HADOOP_COMMON_HOME/bin/hadoop jar $HADOOP_MAPRED_HOME/build/hadoop-mapreduce-examples-*.jar randomwriter -Dmapreduce.job.user.name=$USER -Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=536870912 -Ddfs.block.size=536870912 -libjars $HADOOP_YARN_HOME/modules/hadoop-mapreduce-client-jobclient-*.jar output
-
-The output on the command line should be similar to what you see in a JT/TT setup (Hadoop 0.20/0.21).
-

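For readers skimming this commit, the gist of the removed INSTALL's steps 6 and 9-12 can be condensed into a short shell sketch. All paths below are hypothetical placeholders, not values taken from this commit; the yarn-site.xml and mapred-site.xml snippets from steps 7-8 are presumably covered by the setup guides that BUILDING.txt now points to:

```shell
# Hypothetical install locations (step 6) -- substitute your own
export HADOOP_MAPRED_HOME=/opt/hadoop/mapred
export HADOOP_COMMON_HOME=/opt/hadoop/common
export HADOOP_HDFS_HOME=/opt/hadoop/hdfs
export HADOOP_YARN_HOME=/opt/hadoop/yarn     # where the tarball was untarred (step 4)
export HADOOP_CONF_DIR=/opt/hadoop/conf
export YARN_CONF_DIR=$HADOOP_CONF_DIR

# Steps 9-12: start the YARN daemons and the job history server
cd $HADOOP_YARN_HOME
sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh start nodemanager
sbin/mr-jobhistory-daemon.sh start historyserver
```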
