hadoop-common-user mailing list archives

From Michael Bieniosek <micb...@microsoft.com>
Subject RE: "This is not a DFS" error starting secondarynamenode when using S3FileSystem
Date Tue, 02 Mar 2010 20:41:22 GMT
What are you trying to do?  If you're using S3, you shouldn't need to set up the HDFS daemons
(datanode, namenode, secondarynamenode) at all, because S3 is your filesystem.

Here's an old wiki: http://wiki.apache.org/hadoop/AmazonS3
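
For reference, a minimal core-site.xml along the lines that wiki describes might look like the following (the bucket name and AWS credentials are placeholders; adjust for your setup):

```xml
<configuration>
  <!-- Make the S3 block filesystem the default, in place of HDFS -->
  <property>
    <name>fs.default.name</name>
    <value>s3://YOUR-BUCKET</value>
  </property>
  <!-- AWS credentials; these can also be embedded in the S3 URI,
       but keeping them here is generally preferred -->
  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>YOUR-ACCESS-KEY-ID</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>YOUR-SECRET-ACCESS-KEY</value>
  </property>
</configuration>
```

With a configuration like this you would start only the jobtracker and tasktrackers; there is no namenode, secondarynamenode, or datanode to run, since S3 takes the place of HDFS entirely.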

-Michael
	
-----Original Message-----
From: Brian Long [mailto:brian@dotspots.com] 
Sent: Tuesday, March 02, 2010 10:44 AM
To: common-user@hadoop.apache.org
Subject: "This is not a DFS" error starting secondarynamenode when using S3FileSystem

Hello,

Using Hadoop 0.20.0, I'm setting up a new cluster that uses S3FileSystem. I already have
the namenode, jobtracker, and tasktrackers running fine - I distcp'd some data in to make sure.

However, when I try to start the secondarynamenode (via "bin/hadoop-daemon.sh --config /hadoop-0.20.0/conf
start secondarynamenode"), I get the following:

Exception in thread "main" java.io.IOException: This is not a DFS
        at
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.getInfoServer(SecondaryNameNode.java:289)
        at
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:139)
        at
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
        at
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)

Looking at the code in SecondaryNameNode.java:289

    if (!"hdfs".equals(fsName.getScheme())) {
      throw new IOException("This is not a DFS");
    }

It seems that, by design, the secondary namenode cannot handle any scheme besides hdfs://.
Is this a small bug, or can I really not run a secondarynamenode with an S3FileSystem
(or any other scheme, for that matter)?

Thanks,
Brian
