Mailing-List: contact user-help@hadoop.apache.org; run by ezmlm
Reply-To: user@hadoop.apache.org
From: Harsh J
Date: Sat, 29 Dec 2012 11:57:12 +0530
Subject: Re: how to start hadoop 1.0.4 backup node?
To: user@hadoop.apache.org

Hi,

I'd already addressed this via
https://issues.apache.org/jira/browse/HADOOP-7297, and it is no longer
present in the 1.1.x+ docs.

On Sat, Dec 29, 2012 at 11:42 AM, 周梦想 wrote:
>
> OK, reported the bug as HDFS-4348.
>
> Thanks,
> Andy
>
> 2012/12/29 Suresh Srinivas
>>
>> This is a documentation bug. The Backup node is not available in the 1.x
>> release; it is available in the 0.23 and 2.x releases. Please create a bug
>> to point the 1.x documents to the right set of docs.
>>
>> Sent from a mobile device
>>
>> On Dec 28, 2012, at 7:13 PM, 周梦想 wrote:
>>
>> http://hadoop.apache.org/docs/r1.0.4/hdfs_user_guide.html#Backup+Node
>>
>> The document says:
>> "The Backup node is configured in the same manner as the Checkpoint node.
>> It is started with bin/hdfs namenode -checkpoint"
>>
>> But in hadoop 1.0.4 there is no hdfs script:
>> [zhouhh@Hadoop48 hadoop-1.0.4]$ ls bin
>> hadoop            hadoop-daemons.sh  start-all.sh
>> start-jobhistoryserver.sh  stop-balancer.sh   stop-mapred.sh
>> hadoop-config.sh  rcc                start-balancer.sh  start-mapred.sh
>> stop-dfs.sh       task-controller
>> hadoop-daemon.sh  slaves.sh          start-dfs.sh       stop-all.sh
>> stop-jobhistoryserver.sh
>>
>> [zhouhh@Hadoop48 hadoop-1.0.4]$ find .
>> -name hdfs
>> ./webapps/hdfs
>> ./src/webapps/hdfs
>> ./src/test/org/apache/hadoop/hdfs
>> ./src/test/system/aop/org/apache/hadoop/hdfs
>> ./src/test/system/java/org/apache/hadoop/hdfs
>> ./src/hdfs
>> ./src/hdfs/org/apache/hadoop/hdfs
>>
>> Thanks!
>> Andy

-- 
Harsh J
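[Editor's note] The distinction Suresh describes can be sketched as a small shell helper. This is only a sketch, not an official Hadoop tool: the `HADOOP_HOME` default is a hypothetical path, and the branch test relies on the fact quoted in the thread that the 1.x line ships no `bin/hdfs` script (its periodic checkpointing is done by the SecondaryNameNode daemon instead), while 0.23/2.x ship `bin/hdfs` with the Checkpoint and Backup node roles.

```shell
#!/bin/sh
# Sketch: pick the checkpointing command that matches the installed Hadoop line.
# Assumption: HADOOP_HOME points at the install root (path below is illustrative).
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}

if [ -x "$HADOOP_HOME/bin/hdfs" ]; then
    # 0.23 / 2.x line: the hdfs script exists, and with it the
    # Checkpoint node (and Backup node, via "namenode -backup").
    CMD="$HADOOP_HOME/bin/hdfs namenode -checkpoint"
else
    # 1.x line: no bin/hdfs and no Backup node; checkpointing is
    # provided by the SecondaryNameNode daemon instead.
    CMD="$HADOOP_HOME/bin/hadoop-daemon.sh start secondarynamenode"
fi
echo "$CMD"
```

Either way the chosen command is rooted under `$HADOOP_HOME/bin`, so scripts that wrap it do not need to care which release line is installed.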