From: Katie Legere
To: common-user@hadoop.apache.org
Date: Sat, 20 Mar 2010 12:43:16 -0400
Subject: Bad connection to FS. command aborted.

I'm getting this error ("Bad connection to FS. command aborted.") when I try to copy files to HDFS with this command:

hadoop@10:/home/ubuntu/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg gutenberg


I tried this to see what might be the problem:

hadoop@10:/home/ubuntu/hadoop$ bin/hadoop namenode


And got this:

10/03/20 16:24:55 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG:   host = 10/0.0.0.10

STARTUP_MSG:   args = []

STARTUP_MSG:   version = 0.20.1

STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009

************************************************************/

10/03/20 16:25:00 INFO metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=54310

10/03/20 16:25:00 INFO namenode.NameNode: Namenode up at: localhost/127.0.0.1:54310

10/03/20 16:25:00 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null

10/03/20 16:25:10 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext

10/03/20 16:25:10 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop

10/03/20 16:25:10 INFO namenode.FSNamesystem: supergroup=supergroup

10/03/20 16:25:10 INFO namenode.FSNamesystem: isPermissionEnabled=true

10/03/20 16:25:15 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext

10/03/20 16:25:15 INFO namenode.FSNamesystem: Registered FSNamesystemStatusMBean

10/03/20 16:25:20 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name does not exist.

10/03/20 16:25:20 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.

org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-hadoop/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)

        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)

        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)

        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)

        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)

        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)

        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)

        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

10/03/20 16:25:20 INFO ipc.Server: Stopping server on 54310

10/03/20 16:25:20 ERROR namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-hadoop/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)

        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)

        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)

        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)

        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)

        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)

        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)

        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)


10/03/20 16:25:20 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at 10/0.0.0.10

************************************************************/
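The key line in the log above is "Storage directory /tmp/hadoop-hadoop/dfs/name does not exist": when dfs.name.dir is not set, Hadoop defaults to a directory under /tmp, which is only created when the namenode is formatted and is wiped on reboot. A hedged sketch of a fix (the property names are standard for Hadoop 0.20, but the /home/hadoop paths are example values, not from this post) is to point the metadata at a persistent location in conf/hdfs-site.xml:

```xml
<!-- conf/hdfs-site.xml: keep namenode metadata out of /tmp.
     The /home/hadoop/... paths below are illustrative; use any
     persistent directory writable by the hadoop user. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/dfs/data</value>
  </property>
</configuration>
```

After changing these, the namenode must be formatted once so the new directory is initialized.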


I tried running:

hadoop@10:/home/ubuntu/hadoop$ bin/start-dfs.sh


starting namenode, logging to /home/ubuntu/hadoop/bin/../logs/hadoop-hadoop-namenode-10.out

localhost: starting datanode, logging to /home/ubuntu/hadoop/bin/../logs/hadoop-hadoop-datanode-10.out

localhost: starting secondarynamenode, logging to /home/ubuntu/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-10.out


Which seems good, but I still get the same error.
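Because the storage directory was never created, start-dfs.sh launches a namenode process that immediately dies with the exception above, which is why the copy still fails. The usual remedy in this situation is to format the namenode and restart; this is a sketch, and note that formatting destroys any existing HDFS metadata, so it is only safe on a fresh cluster:

```shell
# Create the namenode's storage directory by formatting it
# (DESTROYS existing HDFS metadata -- fine on a brand-new install).
bin/hadoop namenode -format

# Restart HDFS and confirm the daemons actually stayed up:
bin/start-dfs.sh
jps    # should list NameNode, DataNode, and SecondaryNameNode

# Then retry the copy:
bin/hadoop dfs -copyFromLocal /tmp/gutenberg gutenberg
```

Checking `jps` matters here: the start script reports "starting namenode" even when the process exits a second later, so only the process list (or the namenode .out/.log file) shows whether it survived startup.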


Katie Legere | Senior Programmer/Analyst | Department of Human Resources

613-533-6000    x74180 | Queen's University



