From: "Kilbride, James P." <James.Kilbride@gd-ais.com>
Reply-To: general@hadoop.apache.org
Subject: RE: IPC connections failing
Date: Tue, 15 Jun 2010 13:11:07 -0400
To: general@hadoop.apache.org

Forgot to add on mapred-site.xml:

<property>
  <name>mapred.job.tracker</name>
  <value>hdfs://10.28.208.118:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at. If "local", then the jobs are run in-process as a single map
  and reduce task.</description>
  <final>true</final>
</property>

-----Original Message-----
From: Kilbride, James P. [mailto:James.Kilbride@gd-ais.com]
Sent: Tuesday, June 15, 2010 1:10 PM
To: general@hadoop.apache.org
Subject: IPC connections failing

I've tried to move my system from pseudo-distributed to fully distributed
(with this machine being the first node in the full cluster I'll be setting
up), but the components don't seem to be talking to each other, and I find
the logs filled with ipc.Client connection failures.

Here's the basic log info:

DataNode:

2010-06-15 12:42:42,824 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = centoshadoop.soa.gd-ais.com/10.28.208.118
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-06-15 12:42:49,335 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 0 time(s).
2010-06-15 12:42:50,343 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 1 time(s).
2010-06-15 12:42:51,354 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 2 time(s).
2010-06-15 12:42:53,641 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 3 time(s).
2010-06-15 12:42:54,733 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 4 time(s).
2010-06-15 12:42:55,737 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 5 time(s).
2010-06-15 12:42:56,824 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 6 time(s).
2010-06-15 12:42:57,827 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 7 time(s).
2010-06-15 12:42:58,831 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 8 time(s).
2010-06-15 12:42:59,833 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 9 time(s).
2010-06-15 12:42:59,836 INFO org.apache.hadoop.ipc.RPC: Server at /10.28.208.118:54310 not available yet, Zzzzz...
2010-06-15 12:43:01,841 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 0 time(s).
2010-06-15 12:43:02,844 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 1 time(s).
2010-06-15 12:43:03,846 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 2 time(s).
2010-06-15 12:43:04,849 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 3 time(s).
2010-06-15 12:43:05,852 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 4 time(s).
2010-06-15 12:43:06,855 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 5 time(s).
2010-06-15 12:43:07,858 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 6 time(s).
2010-06-15 12:43:08,883 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 7 time(s).
2010-06-15 12:43:09,970 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 8 time(s).
2010-06-15 12:43:10,973 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 9 time(s).
2010-06-15 12:43:10,973 INFO org.apache.hadoop.ipc.RPC: Server at /10.28.208.118:54310 not available yet, Zzzzz...

JobTracker:

2010-06-15 12:42:52,836 INFO org.apache.hadoop.mapred.JobTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = centoshadoop.soa.gd-ais.com/10.28.208.118
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-06-15 12:42:53,350 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
2010-06-15 12:42:53,815 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=JobTracker, port=54311
2010-06-15 12:42:53,986 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2010-06-15 12:42:55,381 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
Opening the listener on 50030
2010-06-15 12:42:55,400 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
2010-06-15 12:42:55,400 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
2010-06-15 12:42:55,402 INFO org.mortbay.log: jetty-6.1.14
2010-06-15 12:42:56,802 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030
2010-06-15 12:42:56,805 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
2010-06-15 12:42:56,807 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 54311
2010-06-15 12:42:56,807 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
2010-06-15 12:42:58,076 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 0 time(s).
2010-06-15 12:42:59,080 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 1 time(s).
2010-06-15 12:43:00,083 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 2 time(s).
2010-06-15 12:43:01,086 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 3 time(s).
2010-06-15 12:43:02,089 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 4 time(s).
2010-06-15 12:43:03,091 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 5 time(s).
2010-06-15 12:43:04,095 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 6 time(s).
2010-06-15 12:43:05,174 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 7 time(s).
2010-06-15 12:43:06,177 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 8 time(s).
2010-06-15 12:43:07,180 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 9 time(s).
2010-06-15 12:43:07,185 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: null
java.net.ConnectException: Call to /10.28.208.118:54310 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
        at org.apache.hadoop.ipc.Client.call(Client.java:743)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy4.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1665)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:183)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:175)
        at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:3702)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
        at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
        at org.apache.hadoop.ipc.Client.call(Client.java:720)
        ... 16 more

NameNode:

2010-06-15 12:42:38,714 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = centoshadoop.soa.gd-ais.com/10.28.208.118
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-06-15 12:42:38,922 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=54310
2010-06-15 12:42:38,932 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: centoshadoop/10.28.208.118:54310
2010-06-15 12:42:38,938 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2010-06-15 12:42:38,941 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2010-06-15 12:42:39,140 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop,users,hadoop
2010-06-15 12:42:39,141 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2010-06-15 12:42:39,141 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2010-06-15 12:42:39,156 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2010-06-15 12:42:39,161 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2010-06-15 12:42:39,381 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 28
2010-06-15 12:42:39,404 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2010-06-15 12:42:39,404 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 2780 loaded in 0 seconds.
2010-06-15 12:42:39,429 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NumberFormatException: For input string: ""
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
        at java.lang.Long.parseLong(Long.java:431)
        at java.lang.Long.parseLong(Long.java:468)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.readLong(FSEditLog.java:1273)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:670)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:992)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:812)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:364)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

SecondaryNameNode:

2010-06-15 12:42:53,227 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting SecondaryNameNode
STARTUP_MSG:   host = centoshadoop.soa.gd-ais.com/10.28.208.118
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-06-15 12:42:53,442 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=SecondaryNameNode, sessionId=null
2010-06-15 12:42:55,651 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 0 time(s).
2010-06-15 12:42:56,659 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 1 time(s).
2010-06-15 12:42:57,662 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 2 time(s).
2010-06-15 12:42:58,665 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 3 time(s).
2010-06-15 12:42:59,668 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 4 time(s).
2010-06-15 12:43:00,671 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 5 time(s).
2010-06-15 12:43:01,674 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 6 time(s).
2010-06-15 12:43:02,677 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 7 time(s).
2010-06-15 12:43:03,679 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 8 time(s).
2010-06-15 12:43:04,682 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 9 time(s).
2010-06-15 12:43:04,685 INFO org.apache.hadoop.ipc.RPC: Server at /10.28.208.118:54310 not available yet, Zzzzz...
2010-06-15 12:43:06,691 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 0 time(s).
2010-06-15 12:43:07,694 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 1 time(s).
2010-06-15 12:43:08,697 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 2 time(s).
2010-06-15 12:43:09,700 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 3 time(s).
2010-06-15 12:43:10,703 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 4 time(s).
2010-06-15 12:43:11,706 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 5 time(s).
2010-06-15 12:43:12,709 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 6 time(s).
2010-06-15 12:43:13,712 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 7 time(s).
2010-06-15 12:43:14,714 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 8 time(s).
2010-06-15 12:43:15,717 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 9 time(s).
2010-06-15 12:43:15,718 INFO org.apache.hadoop.ipc.RPC: Server at /10.28.208.118:54310 not available yet, Zzzzz...
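[Editor's note: every retry line above, from the DataNode, JobTracker, and SecondaryNameNode alike, points at one and the same endpoint, which is why this reads as a single failure rather than several. A minimal sketch that pulls the target host:port out of one of those lines (the log text is quoted verbatim from above; the sed pattern is an assumption about its exact shape):]

```shell
# One of the retry lines quoted above, verbatim:
line='2010-06-15 12:42:49,335 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /10.28.208.118:54310. Already tried 0 time(s).'

# Extract the host:port the IPC client keeps retrying.
addr=$(printf '%s\n' "$line" | sed -n 's/.*Retrying connect to server: \/\([0-9.]*:[0-9]*\)\..*/\1/p')
echo "$addr"
```

[Every daemon yields the same 10.28.208.118:54310 here, i.e. the NameNode RPC address, so the "Connection refused" messages are all one symptom: the NameNode never started serving.]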
TaskTracker:

2010-06-15 12:42:57,270 INFO org.apache.hadoop.mapred.TaskTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting TaskTracker
STARTUP_MSG:   host = centoshadoop.soa.gd-ais.com/10.28.208.118
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-06-15 12:42:57,854 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2010-06-15 12:42:58,328 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50060
2010-06-15 12:42:58,344 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50060 webServer.getConnectors()[0].getLocalPort() returned 50060
2010-06-15 12:42:58,344 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50060
2010-06-15 12:42:58,345 INFO org.mortbay.log: jetty-6.1.14
2010-06-15 12:42:58,852 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50060
2010-06-15 12:42:58,860 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=TaskTracker, sessionId=
2010-06-15 12:42:58,882 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=TaskTracker, port=50758
2010-06-15 12:42:58,960 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2010-06-15 12:42:58,962 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50758: starting
2010-06-15 12:42:58,964 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50758: starting
2010-06-15 12:42:58,964 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50758: starting
2010-06-15 12:42:58,964 INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: localhost.localdomain/127.0.0.1:50758
2010-06-15 12:42:58,964 INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_centoshadoop:localhost.localdomain/127.0.0.1:50758
2010-06-15 12:42:58,969 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 50758: starting
2010-06-15 12:42:58,969 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50758: starting

The config files are:

core-site.xml:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/hadoop-0.20.2/tmp/dir/hadoop-${user.name}</value>
  <description>A base for the other temporary directories.</description>
  <final>true</final>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://10.28.208.118:54310</value>
  <description>The name of the default file system. A URI whose scheme
  and authority determine the filesystem implementation. The uri's
  scheme determines the config property (fs.SCHEME.impl) naming the
  FileSystem implementation class. The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
  <final>true</final>
</property>

hadoop-env.sh:

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use. Required.
export JAVA_HOME=/usr/java/default

# The hadoop home directory
export HADOOP_HOME=/hadoop-0.20.2

# Extra Java CLASSPATH elements. Optional.
export HADOOP_CLASSPATH=/hbase-0.20.4/lib/zookeeper-3.2.2.jar:/habase-0.20.4/hbase-0.20.4-test.jar:/hbase-0.20.4/hbase-0.20.4.jar:/hbase-0.20.4/conf

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options. Empty by default.
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS

# Extra ssh options. Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

# Where log files are stored. $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

# File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

# host:path where hadoop code should be rsync'd from. Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

# Seconds to sleep between slave commands. Unset by default. This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1

# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids

# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER

# The scheduling priority for daemon processes. See 'man nice'.
# export HADOOP_NICENESS=10

hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>22</value>
  <description>Default block replication. The actual number of
  replications can be specified when the file is created. The default
  is used if not specified at creation time.</description>
  <final>true</final>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/hadoop-0.20.2/hadoopDataStore/data</value>
  <description>DFS Data Directory</description>
  <final>true</final>
</property>

masters:

10.28.208.118

slaves:

10.28.208.118

(And because I know it'll be asked) hosts:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
10.28.208.118   centoshadoop centoshadoop.soa.gd-ais.com

Why is this failing to stand up? I'd like to get this one node working
before I stand up any other data nodes. (The firewall, by the way, is
turned off.)

James Kilbride
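[Editor's note: a detail worth flagging in the logs above is that the TaskTracker registered itself as localhost.localdomain/127.0.0.1:50758 even though the hosts file maps 10.28.208.118 to centoshadoop. One common cause is the machine's own hostname resolving through the 127.0.0.1 entry, since the first matching hosts line answers a lookup. A minimal sketch reproducing that first-match behaviour against a copy of the hosts file quoted above (the /tmp path and the grep-based lookup are illustrative assumptions, not what the resolver literally does):]

```shell
# Illustrative copy of the hosts file from the post.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
10.28.208.118   centoshadoop centoshadoop.soa.gd-ais.com
EOF

# Show which line answers first for each name a daemon might look up;
# -m1 stops at the first match, mimicking first-entry-wins resolution.
grep -m1 -w 'centoshadoop' /tmp/hosts.sample
grep -m1 'localhost.localdomain' /tmp/hosts.sample
```

[If the box's hostname were set to localhost.localdomain rather than centoshadoop, the second lookup is the one the daemons would effectively perform, which would explain the 127.0.0.1 binding.]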