From: Harsh J
Date: Mon, 13 Aug 2012 20:20:13 +0530
Subject: Re: Can not generate a result
To: user@hadoop.apache.org, Astie Darmayantie

Hi Astie,

You can look at http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo to find a solution to this one. Speaking with respect to first-timers, this frequently happens when you format the NameNode but forget to wipe the DataNode block directories at the same time.

On Mon, Aug 13, 2012 at 10:06 AM, Astie Darmayantie wrote:
> Hi, I am new to Hadoop.
> I have already done the precautionary steps, such as configuring Hadoop
> for pseudo-distributed operation and running "namenode -format" before
> running start-all.sh.
>
> Then I tried to run the sample WordCount program with:
>
> ./bin/hadoop jar /home/astie/thesis/project_eclipse/WordCount.jar WordCount \
>     /home/astie/thesis/project_eclipse/input/ \
>     /home/astie/thesis/project_eclipse/output/
>
> It does not generate the result, and I get this in the log file:
>
> 2012-08-13 11:28:27,053 WARN org.apache.hadoop.hdfs.DFSClient: Error
> Recovery for block null bad datanode[0] nodes == null
> 2012-08-13 11:28:27,053 WARN org.apache.hadoop.hdfs.DFSClient: Could not get
> block locations. Source file "/tmp/mapred/system/jobtracker.info" -
> Aborting...
> 2012-08-13 11:28:27,053 WARN org.apache.hadoop.mapred.JobTracker: Writing to
> file hdfs://localhost:9000/tmp/mapred/system/jobtracker.info failed!
> 2012-08-13 11:28:27,054 WARN org.apache.hadoop.mapred.JobTracker: FileSystem
> is not ready yet!
> 2012-08-13 11:28:27,059 WARN org.apache.hadoop.mapred.JobTracker: Failed to
> initialize recovery manager.
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes,
> instead of 1
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
>     at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:416)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1070)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>     at $Proxy5.addBlock(Unknown Source)
>     at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:616)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>     at $Proxy5.addBlock(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)
>
> I am using openSUSE and hadoop-1.0.3, and I am using Eclipse to write the
> program.
> The log says the node was null. Yes, I am still running it on my
> computer only. Is that the problem?
> Can you tell me how to fix this? Thank you.

-- 
Harsh J
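
[Archive editor's note] The cleanup Harsh describes can be sketched as below for a pseudo-distributed Hadoop 1.x setup. The paths assume the default hadoop.tmp.dir of /tmp/hadoop-${user.name} (so dfs.name.dir and dfs.data.dir default to /tmp/hadoop-$USER/dfs/name and /tmp/hadoop-$USER/dfs/data); adjust them if your configuration points elsewhere. This destroys all HDFS data, so it is only appropriate on a throwaway test cluster.

```shell
# Stop all daemons first.
./bin/stop-all.sh

# Remove BOTH the NameNode metadata and the DataNode block directories, so
# their namespace IDs cannot disagree after the reformat. These are the
# Hadoop 1.x defaults under ${hadoop.tmp.dir}/dfs/; adjust if you changed
# dfs.name.dir / dfs.data.dir. WARNING: this erases everything in HDFS.
rm -rf /tmp/hadoop-$USER/dfs/name /tmp/hadoop-$USER/dfs/data

# Reformat the NameNode and restart the daemons.
./bin/hadoop namenode -format
./bin/start-all.sh

# Verify that a DataNode has registered before submitting jobs; the report
# should show one live datanode rather than 0.
./bin/hadoop dfsadmin -report
```

If only the NameNode is formatted, the DataNode keeps the old namespace ID, refuses to join the new filesystem, and any write then fails with the "could only be replicated to 0 nodes, instead of 1" error seen above.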