hadoop-common-commits mailing list archives

From ozawa@apache.org
Subject [1/2] hadoop git commit: Revert "HADOOP-11558. Fix dead links to doc of hadoop-tools. Contributed by Masatake Iwasaki."
Date Sun, 15 Mar 2015 05:31:01 GMT
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 815487b3c -> f9c18fd61


Revert "HADOOP-11558. Fix dead links to doc of hadoop-tools. Contributed by Masatake Iwasaki."

This reverts commit 815487b3cbf156ea3bd1b6b6959539b7b8a734ce.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e28e2e4f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e28e2e4f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e28e2e4f

Branch: refs/heads/branch-2.7
Commit: e28e2e4f5287b2b48e0fe4439ea4f924869dd113
Parents: 815487b
Author: Tsuyoshi Ozawa <ozawa@apache.org>
Authored: Sun Mar 15 14:28:17 2015 +0900
Committer: Tsuyoshi Ozawa <ozawa@apache.org>
Committed: Sun Mar 15 14:28:17 2015 +0900

----------------------------------------------------------------------
 hadoop-common-project/hadoop-common/CHANGES.txt       |  3 ---
 .../src/site/markdown/SchedulerLoadSimulator.md       |  2 +-
 .../src/site/markdown/HadoopStreaming.md.vm           | 14 +++++++-------
 3 files changed, 8 insertions(+), 11 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e28e2e4f/hadoop-common-project/hadoop-common/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index 88c2280..6747ad0 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -659,9 +659,6 @@ Release 2.7.0 - UNRELEASED
     HADOOP-11710. Make CryptoOutputStream behave like DFSOutputStream wrt
     synchronization. (Sean Busbey via yliu)
 
-    HADOOP-11558. Fix dead links to doc of hadoop-tools. (Masatake Iwasaki
-    via ozawa)
-
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e28e2e4f/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md b/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
index 2cffc86..ca179ee 100644
--- a/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
+++ b/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
@@ -43,7 +43,7 @@ The Yarn Scheduler Load Simulator (SLS) is such a tool, which can simulate large
 o
 The simulator will exercise the real Yarn `ResourceManager` removing the network factor by simulating `NodeManagers` and `ApplicationMasters` via handling and dispatching `NM`/`AMs` heartbeat events from within the same JVM. To keep tracking of scheduler behavior and performance, a scheduler wrapper will wrap the real scheduler.
 
-The size of the cluster and the application load can be loaded from configuration files, which are generated from job history files directly by adopting [Apache Rumen](../hadoop-rumen/Rumen.html).
+The size of the cluster and the application load can be loaded from configuration files, which are generated from job history files directly by adopting [Apache Rumen](https://hadoop.apache.org/docs/stable/rumen.html).
 
 The simulator will produce real time metrics while executing, including:
 
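For orientation (this example is not part of the patch): the simulator described in the context above is driven from the command line. A rough sketch of an invocation, assuming the slsrun.sh launcher shipped under share/hadoop/tools/sls in 2.x releases (flag names may vary by release):

    $ cd $HADOOP_HOME/share/hadoop/tools/sls
    $ bin/slsrun.sh --input-rumen=<TRACE_FILE1,TRACE_FILE2,...> \
        --output-dir=<SIMULATION_OUTPUT_DIR> [--nodes=<NODES_FILE>] \
        [--track-jobs=<JOB_IDS_TO_TRACK>] [--print-simulation]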

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e28e2e4f/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm b/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
index 179b1f0..7f478e2 100644
--- a/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
+++ b/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
@@ -201,7 +201,7 @@ To specify additional local temp directories use:
      -D mapred.system.dir=/tmp/system
      -D mapred.temp.dir=/tmp/temp
 
-**Note:** For more details on job configuration parameters see: [mapred-default.xml](../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml)
+**Note:** For more details on job configuration parameters see: [mapred-default.xml](./mapred-default.xml)
 
 $H4 Specifying Map-Only Jobs
 
@@ -322,7 +322,7 @@ More Usage Examples
 
 $H3 Hadoop Partitioner Class
 
-Hadoop has a library class, [KeyFieldBasedPartitioner](../api/org/apache/hadoop/mapred/lib/KeyFieldBasedPartitioner.html), that is useful for many applications. This class allows the Map/Reduce framework to partition the map outputs based on certain key fields, not the whole keys. For example:
+Hadoop has a library class, [KeyFieldBasedPartitioner](../../api/org/apache/hadoop/mapred/lib/KeyFieldBasedPartitioner.html), that is useful for many applications. This class allows the Map/Reduce framework to partition the map outputs based on certain key fields, not the whole keys. For example:
 
     hadoop jar hadoop-streaming-${project.version}.jar \
       -D stream.map.output.field.separator=. \
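The diff context cuts the doc's example short. The complete KeyFieldBasedPartitioner example in this section of HadoopStreaming.md.vm reads approximately as follows; it treats the key as four dot-separated fields and partitions on the first two (-k1,2):

    hadoop jar hadoop-streaming-${project.version}.jar \
      -D stream.map.output.field.separator=. \
      -D stream.num.map.output.key.fields=4 \
      -D map.output.key.field.separator=. \
      -D mapreduce.partition.keypartitioner.options=-k1,2 \
      -input myInputDirs \
      -output myOutputDir \
      -mapper /bin/cat \
      -reducer /bin/cat \
      -partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner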
@@ -372,7 +372,7 @@ Sorting within each partition for the reducer(all 4 fields used for sorting)
 
 $H3 Hadoop Comparator Class
 
-Hadoop has a library class, [KeyFieldBasedComparator](../api/org/apache/hadoop/mapreduce/lib/partition/KeyFieldBasedComparator.html), that is useful for many applications. This class provides a subset of features provided by the Unix/GNU Sort. For example:
+Hadoop has a library class, [KeyFieldBasedComparator](../../api/org/apache/hadoop/mapreduce/lib/partition/KeyFieldBasedComparator.html), that is useful for many applications. This class provides a subset of features provided by the Unix/GNU Sort. For example:
 
     hadoop jar hadoop-streaming-${project.version}.jar \
      -D mapreduce.job.output.key.comparator.class=org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator \
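Again the example is truncated by the diff context; the full comparator command in the doc is approximately the following, where -k2,2nr sorts on the second key field numerically in reverse, in the style of GNU sort:

    hadoop jar hadoop-streaming-${project.version}.jar \
      -D mapreduce.job.output.key.comparator.class=org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator \
      -D stream.map.output.field.separator=. \
      -D stream.num.map.output.key.fields=4 \
      -D mapreduce.map.output.key.field.separator=. \
      -D mapreduce.partition.keycomparator.options=-k2,2nr \
      -input myInputDirs \
      -output myOutputDir \
      -mapper /bin/cat \
      -reducer /bin/cat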
@@ -406,7 +406,7 @@ Sorting output for the reducer (where second field used for sorting)
 
 $H3 Hadoop Aggregate Package
 
-Hadoop has a library package called [Aggregate](../api/org/apache/hadoop/mapred/lib/aggregate/package-summary.html). Aggregate provides a special reducer class and a special combiner class, and a list of simple aggregators that perform aggregations such as "sum", "max", "min" and so on over a sequence of values. Aggregate allows you to define a mapper plugin class that is expected to generate "aggregatable items" for each input key/value pair of the mappers. The combiner/reducer will aggregate those aggregatable items by invoking the appropriate aggregators.
+Hadoop has a library package called [Aggregate](../../org/apache/hadoop/mapred/lib/aggregate/package-summary.html). Aggregate provides a special reducer class and a special combiner class, and a list of simple aggregators that perform aggregations such as "sum", "max", "min" and so on over a sequence of values. Aggregate allows you to define a mapper plugin class that is expected to generate "aggregatable items" for each input key/value pair of the mappers. The combiner/reducer will aggregate those aggregatable items by invoking the appropriate aggregators.
 
 To use Aggregate, simply specify "-reducer aggregate":
 
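The command that follows this line in the doc pairs a mapper emitting aggregatable records with the built-in aggregate reducer; approximately:

    hadoop jar hadoop-streaming-${project.version}.jar \
      -input myInputDirs \
      -output myOutputDir \
      -mapper myAggregatorForKeyCount.py \
      -reducer aggregate \
      -file myAggregatorForKeyCount.py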
@@ -441,7 +441,7 @@ The python program myAggregatorForKeyCount.py looks like:
 
 $H3 Hadoop Field Selection Class
 
-Hadoop has a library class, [FieldSelectionMapReduce](../api/org/apache/hadoop/mapred/lib/FieldSelectionMapReduce.html), that effectively allows you to process text data like the unix "cut" utility. The map function defined in the class treats each input key/value pair as a list of fields. You can specify the field separator (the default is the tab character). You can select an arbitrary list of fields as the map output key, and an arbitrary list of fields as the map output value. Similarly, the reduce function defined in the class treats each input key/value pair as a list of fields. You can select an arbitrary list of fields as the reduce output key, and an arbitrary list of fields as the reduce output value. For example:
+Hadoop has a library class, [FieldSelectionMapReduce](../../api/org/apache/hadoop/mapred/lib/FieldSelectionMapReduce.html), that effectively allows you to process text data like the unix "cut" utility. The map function defined in the class treats each input key/value pair as a list of fields. You can specify the field separator (the default is the tab character). You can select an arbitrary list of fields as the map output key, and an arbitrary list of fields as the map output value. Similarly, the reduce function defined in the class treats each input key/value pair as a list of fields. You can select an arbitrary list of fields as the reduce output key, and an arbitrary list of fields as the reduce output value. For example:
 
     hadoop jar hadoop-streaming-${project.version}.jar \
       -D mapreduce.map.output.key.field.separator=. \
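The FieldSelectionMapReduce example is likewise truncated. A representative completion, assuming the option names documented for 2.x streaming (the fieldsel spec values here are illustrative):

    hadoop jar hadoop-streaming-${project.version}.jar \
      -D mapreduce.map.output.key.field.separator=. \
      -D mapreduce.partition.keypartitioner.options=-k1,2 \
      -D mapreduce.fieldsel.data.field.separator=. \
      -D mapreduce.fieldsel.map.output.key.value.fields.spec=6,5,1-3:0- \
      -D mapreduce.fieldsel.reduce.output.key.value.fields.spec=0-2:5- \
      -input myInputDirs \
      -output myOutputDir \
      -mapper org.apache.hadoop.mapred.lib.FieldSelectionMapReduce \
      -reducer org.apache.hadoop.mapred.lib.FieldSelectionMapReduce \
      -partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner

In the map spec, 6,5,1-3 selects fields 6, 5, and 1 through 3 as the output key, and 0- (all fields) as the output value.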
@@ -480,7 +480,7 @@ As an example, consider the problem of zipping (compressing) a set of files acro
 
 $H3 How many reducers should I use?
 
-See MapReduce Tutorial for details: [Reducer](../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Reducer)
+See MapReduce Tutorial for details: [Reducer](./MapReduceTutorial.html#Reducer)
 
 $H3 If I set up an alias in my shell script, will that work after -mapper?
 
@@ -556,4 +556,4 @@ A streaming process can use the stderr to emit status information. To set a stat
 
 $H3 How do I get the Job variables in a streaming job's mapper/reducer?
 
-See [Configured Parameters](../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Configured_Parameters). During the execution of a streaming job, the names of the "mapred" parameters are transformed. The dots ( . ) become underscores ( \_ ). For example, mapreduce.job.id becomes mapreduce\_job\_id and mapreduce.job.jar becomes mapreduce\_job\_jar. In your code, use the parameter names with the underscores.
+See [Configured Parameters](./MapReduceTutorial.html#Configured_Parameters). During the execution of a streaming job, the names of the "mapred" parameters are transformed. The dots ( . ) become underscores ( \_ ). For example, mapreduce.job.id becomes mapreduce\_job\_id and mapreduce.job.jar becomes mapreduce\_job\_jar. In your code, use the parameter names with the underscores.
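To illustrate the dots-to-underscores rule described above: streaming tasks receive the job configuration as environment variables, so a mapper can read them directly. A hypothetical bash mapper that tags every input line with the job id:

    #!/bin/bash
    # mapreduce.job.id is exposed to the streaming task as $mapreduce_job_id
    while IFS= read -r line; do
      printf '%s\t%s\n' "$mapreduce_job_id" "$line"
    done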

