From: arp@apache.org
To: common-commits@hadoop.apache.org
Subject: svn commit: r1596965 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: CHANGES.txt src/site/apt/SingleNodeSetup.apt.vm
Date: Thu, 22 May 2014 20:41:45 -0000
Message-Id: <20140522204145.28317238889B@eris.apache.org>

Author: arp
Date: Thu May 22 20:41:44 2014
New Revision: 1596965

URL: http://svn.apache.org/r1596965
Log:
HADOOP-10618: Merging r1596964 from trunk to branch-2.
Modified:
    hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
    hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/site/apt/SingleNodeSetup.apt.vm

Modified: hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1596965&r1=1596964&r2=1596965&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt (original)
+++ hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt Thu May 22 20:41:44 2014
@@ -54,6 +54,9 @@ Release 2.5.0 - UNRELEASED
     HADOOP-10614. CBZip2InputStream is not threadsafe (Xiangrui Meng via
     Sandy Ryza)
 
+    HADOOP-10618. Remove SingleNodeSetup.apt.vm. (Akira Ajisaka via
+    Arpit Agarwal)
+
   OPTIMIZATIONS
 
   BUG FIXES

Modified: hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/site/apt/SingleNodeSetup.apt.vm
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/site/apt/SingleNodeSetup.apt.vm?rev=1596965&r1=1596964&r2=1596965&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/site/apt/SingleNodeSetup.apt.vm (original)
+++ hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/site/apt/SingleNodeSetup.apt.vm Thu May 22 20:41:44 2014
@@ -18,210 +18,7 @@
 Single Node Setup
 
-%{toc|section=1|fromDepth=0}
+  This page will be removed in the next major release.
 
-* Purpose
-
-  This document describes how to set up and configure a single-node
-  Hadoop installation so that you can quickly perform simple operations
-  using Hadoop MapReduce and the Hadoop Distributed File System (HDFS).
-
-* Prerequisites
-
-** Supported Platforms
-
-  * GNU/Linux is supported as a development and production platform.
-    Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes.
-
-  * Windows is also a supported platform.
-
-** Required Software
-
-  Required software for Linux and Windows includes:
-
-  [[1]] Java^TM 1.6.x, preferably from Sun, must be installed.
-
-  [[2]] ssh must be installed and sshd must be running to use the Hadoop
-        scripts that manage remote Hadoop daemons.
-
-** Installing Software
-
-  If your cluster doesn't have the requisite software you will need to
-  install it.
-
-  For example on Ubuntu Linux:
-
-----
-  $ sudo apt-get install ssh
-  $ sudo apt-get install rsync
-----
-
-* Download
-
-  To get a Hadoop distribution, download a recent stable release from one
-  of the Apache Download Mirrors.
-
-* Prepare to Start the Hadoop Cluster
-
-  Unpack the downloaded Hadoop distribution. In the distribution, edit
-  the file <<>> to define at least <<>> to be the root
-  of your Java installation.
-
-  Try the following command:
-
-----
-  $ bin/hadoop
-----
-
-  This will display the usage documentation for the hadoop script.
-
-  Now you are ready to start your Hadoop cluster in one of the three
-  supported modes:
-
-  * Local (Standalone) Mode
-
-  * Pseudo-Distributed Mode
-
-  * Fully-Distributed Mode
-
-* Standalone Operation
-
-  By default, Hadoop is configured to run in a non-distributed mode, as a
-  single Java process. This is useful for debugging.
-
-  The following example copies the unpacked conf directory to use as
-  input and then finds and displays every match of the given regular
-  expression. Output is written to the given output directory.
-
-----
-  $ mkdir input
-  $ cp conf/*.xml input
-  $ bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
-  $ cat output/*
-----
-
-* Pseudo-Distributed Operation
-
-  Hadoop can also be run on a single-node in a pseudo-distributed mode
-  where each Hadoop daemon runs in a separate Java process.
-
-** Configuration
-
-  Use the following:
-
-  conf/core-site.xml:
-
-----
-  <configuration>
-    <property>
-      <name>fs.defaultFS</name>
-      <value>hdfs://localhost:9000</value>
-    </property>
-  </configuration>
-----
-
-  conf/hdfs-site.xml:
-
-----
-  <configuration>
-    <property>
-      <name>dfs.replication</name>
-      <value>1</value>
-    </property>
-  </configuration>
-----
-
-  conf/mapred-site.xml:
-
-----
-  <configuration>
-    <property>
-      <name>mapred.job.tracker</name>
-      <value>localhost:9001</value>
-    </property>
-  </configuration>
-----
-
-** Setup passphraseless ssh
-
-  Now check that you can ssh to the localhost without a passphrase:
-
-----
-  $ ssh localhost
-----
-
-  If you cannot ssh to localhost without a passphrase, execute the
-  following commands:
-
-----
-  $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
-  $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
-----
-
-** Execution
-
-  Format a new distributed-filesystem:
-
-----
-  $ bin/hadoop namenode -format
-----
-
-  Start the hadoop daemons:
-
-----
-  $ bin/start-all.sh
-----
-
-  The hadoop daemon log output is written to the <<<${HADOOP_LOG_DIR}>>>
-  directory (defaults to <<<${HADOOP_PREFIX}/logs>>>).
-
-  Browse the web interface for the NameNode and the JobTracker; by
-  default they are available at:
-
-  * NameNode - <<>>
-
-  * JobTracker - <<>>
-
-  Copy the input files into the distributed filesystem:
-
-----
-  $ bin/hadoop fs -put conf input
-----
-
-  Run some of the examples provided:
-
-----
-  $ bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
-----
-
-  Examine the output files:
-
-  Copy the output files from the distributed filesystem to the local
-  filesystem and examine them:
-
-----
-  $ bin/hadoop fs -get output output
-  $ cat output/*
-----
-
-  or
-
-  View the output files on the distributed filesystem:
-
-----
-  $ bin/hadoop fs -cat output/*
-----
-
-  When you're done, stop the daemons with:
-
-----
-  $ bin/stop-all.sh
-----
-
-* Fully-Distributed Operation
-
-  For information on setting up fully-distributed, non-trivial clusters
-  see {{{./ClusterSetup.html}Cluster Setup}}.
-
-  Java and JNI are trademarks or registered trademarks of Sun
-  Microsystems, Inc. in the United States and other countries.
+
+  See {{{./SingleCluster.html}Single Cluster Setup}} to set up and configure a
+  single-node Hadoop installation.
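The `grep` example job that the removed walkthrough runs (`bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'`) counts every match of the given regular expression across the input files and sorts the matched strings by frequency. A minimal non-Hadoop sketch of that semantics, for readers landing on this commit from the archive (this is an illustration of what the job computes, not the example jar's actual implementation):

```python
import re
from collections import Counter


def grep_counts(texts, pattern):
    """Count occurrences of each string matching `pattern` across the
    given texts, mimicking the output of the Hadoop grep example job."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(pattern, text))
    # The example job emits results sorted by descending count.
    return sorted(counts.items(), key=lambda kv: -kv[1])


if __name__ == "__main__":
    # Stand-in for the copied conf/*.xml files used as job input.
    docs = ["dfs.replication dfsadmin dfs.replication"]
    print(grep_counts(docs, r"dfs[a-z.]+"))
```

Run against the real `conf/*.xml` input, each distinct match of `dfs[a-z.]+` (property names like `dfs.replication`) appears once in the output with its occurrence count, which is what `cat output/*` displays in the walkthrough.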