From: "Xu, Richard" <richard.xu@citi.com>
To: common-user@hadoop.apache.org
CC: yiqun.xu@gmail.com
Date: Thu, 26 May 2011 19:01:37 -0500
Subject: Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 cluster

Hi Folks,

We are trying to get HBase and Hadoop running on a cluster, two Solaris servers for now.

Because of the incompatibility issue between HBase and Hadoop, we have to stick with the hadoop-0.20.2-append release.

It is very straightforward to get hadoop-0.20.203 running, but we have been stuck for several days with hadoop-0.20.2 -- even with the plain official release, not the append version.

1. When we try to run start-mapred.sh (hadoop-daemon.sh --config $HADOOP_CONF_DIR start jobtracker), the following errors show up in the namenode and jobtracker logs:

2011-05-26 12:30:29,169 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
2011-05-26 12:30:29,175 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000, call addBlock(/tmp/hadoop-cfadm/mapred/system/jobtracker.info, DFSClient_2146408809) from 169.193.181.212:55334: error: java.io.IOException: File /tmp/hadoop-cfadm/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /tmp/hadoop-cfadm/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

2. Also, Configured Capacity is 0, and we cannot put any file into HDFS.

3. On the datanode server there are no errors in the logs, but the tasktracker log has the following suspicious entries:

2011-05-25 23:36:10,839 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2011-05-25 23:36:10,839 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 41904: starting
2011-05-25 23:36:10,852 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 41904: starting
2011-05-25 23:36:10,853 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 41904: starting
2011-05-25 23:36:10,853 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 41904: starting
2011-05-25 23:36:10,853 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 41904: starting
.....
2011-05-25 23:36:10,855 INFO org.apache.hadoop.ipc.Server: IPC Server handler 63 on 41904: starting
2011-05-25 23:36:10,950 INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: localhost/127.0.0.1:41904
2011-05-25 23:36:10,950 INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_loanps3d:localhost/127.0.0.1:41904

I have tried all the suggestions found so far, including:
1) removing the hadoop-name and hadoop-data folders and reformatting the namenode;
2) cleaning up all temp files/folders under /tmp
(the exact commands are sketched in the P.S. below).

But nothing works. Your help is greatly appreciated.

Thanks,
RX
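
P.S. For reference, a minimal sketch of the cleanup/reformat sequence from 1) and 2) above, assuming the default hadoop.tmp.dir under /tmp; the hadoop-name/hadoop-data paths below are placeholders for our local layout, not the actual locations:

    # stop all daemons before touching any directories
    bin/stop-all.sh

    # 1) remove the name and data directories on every node
    #    (placeholder paths for our hadoop-name / hadoop-data folders)
    rm -rf /path/to/hadoop-name /path/to/hadoop-data

    # 2) clean up leftover temp files (default hadoop.tmp.dir is /tmp/hadoop-<user>)
    rm -rf /tmp/hadoop-*

    # reformat HDFS and restart
    bin/hadoop namenode -format
    bin/start-dfs.sh
    bin/start-mapred.sh

    # one way to check whether datanodes registered and Configured Capacity is non-zero
    bin/hadoop dfsadmin -report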