Subject: svn commit: r656523 [2/4] - in /hadoop/core/branches/branch-0.17: ./ docs/ docs/skin/images/ src/docs/src/documentation/conf/ src/docs/src/documentation/content/xdocs/
Date: Thu, 15 May 2008 07:03:49 -0000
To: core-commits@hadoop.apache.org
From: nigel@apache.org

Modified: hadoop/core/branches/branch-0.17/docs/mapred_tutorial.html
URL: http://svn.apache.org/viewvc/hadoop/core/branches/branch-0.17/docs/mapred_tutorial.html?rev=656523&r1=656522&r2=656523&view=diff
==============================================================================
--- hadoop/core/branches/branch-0.17/docs/mapred_tutorial.html (original)
+++ hadoop/core/branches/branch-0.17/docs/mapred_tutorial.html Thu May 15 00:03:48 2008

Job Input

appropriate CompressionCodec. However, it must be noted that compressed files with the above extensions cannot be split, and each such compressed file is processed in its entirety by a single mapper.


InputSplit

FileSplit is the default InputSplit. It sets map.input.file to the path of the input file for the logical split.
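A minimal sketch of how a mapper can read the map.input.file value mentioned above, using the old org.apache.hadoop.mapred API (class and output shape are illustrative):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

// Sketch: a Mapper that picks up the input-file path the framework
// sets for each FileSplit-backed task.
public class PathAwareMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  private String inputFile;   // path of the file backing this task's split

  @Override
  public void configure(JobConf job) {
    // map.input.file is set by the framework for FileSplit-based inputs
    inputFile = job.get("map.input.file");
  }

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    // Illustrative use: tag every record with the file it came from.
    output.collect(new Text(inputFile), value);
  }
}
```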


RecordReader

for processing. RecordReader thus assumes the responsibility of processing record boundaries and presents the tasks with keys and values.


Job Output


TextOutputFormat is the default OutputFormat.


Task Side-Effect Files

In some applications, component tasks need to create and/or write to side-files, which differ from the actual job-output files.


The entire discussion also holds for maps of jobs with reducer=NONE (i.e. zero reduces), since in that case the output of the map goes directly to HDFS.


RecordWriter

RecordWriter writes the output key/value pairs to an output file.

RecordWriter implementations write the job outputs to the FileSystem.


Other Useful Features


Counters

Counters represent global counters, defined either by the Map/Reduce framework or by applications. Applications can update them via Reporter.incrCounter(Enum, long) in the map and/or reduce methods. These counters are then globally aggregated by the framework.
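A minimal sketch of an application-defined counter being updated via Reporter.incrCounter(Enum, long), using the old org.apache.hadoop.mapred API (the enum and the "malformed record" condition are illustrative):

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

// Sketch: updating a global counter from a map method.
public class CountingMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  // Application-defined counter group; the enum name is illustrative.
  static enum MyCounters { MALFORMED_RECORDS }

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    if (value.toString().length() == 0) {
      // Globally aggregated by the framework across all map tasks.
      reporter.incrCounter(MyCounters.MALFORMED_RECORDS, 1);
      return;
    }
    output.collect(new Text(value), new IntWritable(1));
  }
}
```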


DistributedCache

DistributedCache.createSymlink(Configuration) api. Files have execution permissions set.
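A minimal sketch of distributing a file via the DistributedCache and asking the framework to symlink it into each task's working directory (the path and symlink name are illustrative, and the file is assumed to already exist on the DFS):

```java
import java.net.URI;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.mapred.JobConf;

// Sketch: caching a file and symlinking it into the task's cwd.
public class CacheSetup {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(CacheSetup.class);

    // The fragment after '#' names the symlink created in the task's cwd.
    DistributedCache.addCacheFile(new URI("/myapp/lookup.dat#lookup.dat"), conf);

    // Ask the framework to create the symlinks.
    DistributedCache.createSymlink(conf);
  }
}
```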


Tool

The Tool interface supports the handling of generic Hadoop command-line options.
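A minimal sketch of the usual Tool/ToolRunner pattern, in which ToolRunner parses the generic options (-conf, -D, -fs, -jt) before run() sees the remaining application-specific arguments (the job class name is illustrative):

```java
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Sketch: a job driver implementing Tool so generic Hadoop options
// are handled by the framework rather than by application code.
public class MyJob extends Configured implements Tool {

  public int run(String[] args) throws Exception {
    // getConf() carries any settings supplied via the generic options.
    JobConf job = new JobConf(getConf(), MyJob.class);
    // ... per-job configuration using the application-specific args ...
    JobClient.runJob(job);
    return 0;
  }

  public static void main(String[] args) throws Exception {
    int exitCode = ToolRunner.run(new MyJob(), args);
    System.exit(exitCode);
  }
}
```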


IsolationRunner


IsolationRunner will run the failed task in a single JVM, which can be in the debugger, over precisely the same input.


Debugging

The Map/Reduce framework provides a facility to run user-provided scripts for debugging. When a map/reduce task fails, the user can run a debug script to do post-processing on the task's logs.

In the following sections we discuss how to submit a debug script along with the job. To submit the debug script, it first has to be distributed. Then the script has to be supplied in the Configuration.

How to distribute script file:

To distribute the debug script file, first copy the file to the DFS. The file can then be distributed and symlinked via the DistributedCache.createSymLink(Configuration) api.

How to submit script:

A quick way to submit a debug script is to set values for the properties "mapred.map.task.debug.script" and "mapred.reduce.task.debug.script", for debugging map and reduce tasks respectively. The script is invoked as: $script $stdout $stderr $syslog $jobconf $program
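A minimal sketch of setting these properties in a JobConf (the script name "./debug.sh" is illustrative and assumes the script was shipped via the DistributedCache with a matching symlink):

```java
import org.apache.hadoop.mapred.JobConf;

// Sketch: pointing the framework at a previously distributed debug script.
public class DebugScriptSetup {
  public static void main(String[] args) {
    JobConf conf = new JobConf(DebugScriptSetup.class);
    conf.set("mapred.map.task.debug.script", "./debug.sh");
    conf.set("mapred.reduce.task.debug.script", "./debug.sh");
  }
}
```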

Default Behavior:

For pipes, a default script is run to process core dumps under gdb; it prints the stack trace and gives info about running threads.


JobControl

JobControl is a utility which encapsulates a set of Map-Reduce jobs and their dependencies.
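A minimal sketch of chaining two jobs with JobControl so that the second runs only after the first succeeds (the JobConfs are assumed to be fully configured elsewhere, and the group name is illustrative):

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.mapred.jobcontrol.JobControl;

// Sketch: expressing a dependency between two Map-Reduce jobs.
public class Pipeline {
  public static void runPipeline(JobConf firstConf, JobConf secondConf)
      throws Exception {
    Job first = new Job(firstConf);
    Job second = new Job(secondConf);
    second.addDependingJob(first);   // second waits for first to succeed

    JobControl control = new JobControl("wordcount-pipeline");
    control.addJob(first);
    control.addJob(second);
    control.run();                   // drives the jobs to completion
  }
}
```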


Data Compression

Hadoop Map-Reduce provides facilities for the application-writer to specify compression for both intermediate map-outputs and the job-outputs. Hadoop also provides native implementations of the compression codecs for reasons of both performance (zlib) and non-availability of Java libraries (lzo). More details on their usage and availability are available here.

Intermediate Outputs

Applications can control compression of intermediate map-outputs via the JobConf.setCompressMapOutput(boolean) api, and the compression type via the JobConf.setMapOutputCompressionType(SequenceFile.CompressionType) api.
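A minimal sketch of turning on compression for the intermediate map-outputs via the JobConf apis (the choice of BLOCK compression is illustrative):

```java
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.mapred.JobConf;

// Sketch: compressing the intermediate map-outputs.
public class MapOutputCompression {
  public static void configure(JobConf conf) {
    conf.setCompressMapOutput(true);
    // Intermediate outputs are stored as SequenceFiles, so a
    // SequenceFile compression type (RECORD or BLOCK) applies.
    conf.setMapOutputCompressionType(SequenceFile.CompressionType.BLOCK);
  }
}
```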

Job Outputs

Applications can control compression of job-outputs via the job configuration (e.g. the mapred.output.compress and mapred.output.compression.codec properties).
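A minimal sketch of compressing the final job-outputs via the classic mapred.output.compress / mapred.output.compression.codec configuration properties (the choice of GzipCodec is illustrative):

```java
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapred.JobConf;

// Sketch: compressing the job-outputs via configuration properties.
public class JobOutputCompression {
  public static void configure(JobConf conf) {
    conf.setBoolean("mapred.output.compress", true);
    conf.setClass("mapred.output.compression.codec",
                  GzipCodec.class, CompressionCodec.class);
  }
}
```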

Example: WordCount v2.0

Here is a more complete WordCount which uses many of the features provided by the Map-Reduce framework. It only works with a pseudo-distributed or fully-distributed Hadoop installation.


Source Code


Sample Runs

Sample text-files as input:



Highlights

The second version of WordCount improves upon the previous one by using some features offered by the Map-Reduce framework: