hadoop-mapreduce-user mailing list archives

From Silvan Kaiser <sil...@quobyte.com>
Subject Question on configuring Hadoop 2.6.0 with a different filesystem
Date Thu, 16 Apr 2015 10:53:47 GMT
I'm rather new to Hadoop and currently testing the integration of a new
file system as a replacement for HDFS, similar to integrations like
GlusterFS, GPFS, Ceph, etc. I have an implementation of the FileSystem
class, but I've hit a basic issue when trying to test it. This seems to be
rooted in a misconfiguration of my setup:

Upon NameNode startup the fs.defaultFS setting is rejected because the
scheme does not match 'hdfs', which is true, as I'm using the scheme for our
plugin. Log output:

Incorrect configuration: namenode address dfs.namenode.servicerpc-address
or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: starting namenode, logging to
localhost: starting datanode, logging to
Starting secondary namenodes []
starting secondarynamenode, logging to /home/kaisers/tmp/hadoop-2.6.0/logs/hadoop-kaisers-secondarynamenode-kaisers.out
Exception in thread "main" java.lang.IllegalArgumentException:
Invalid URI for NameNode address (check fs.defaultFS): quobyte://prod.corp.quobyte.com:7861/users/kaisers/hadoop-test/ is not of scheme 'hdfs'.
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:413)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:406)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:229)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)

Now the first error message states that the namenode address settings are
missing, but I could find no example where these are set for a different
file system. All examples only set fs.defaultFS, but that alone does not
seem to be sufficient here.

The setup is pseudo-distributed as in the Hadoop documentation;
core-site.xml sets fs.defaultFS to our scheme, with the original HDFS value
left commented out:

        <property>
                <name>fs.defaultFS</name>
                <!-- <value>hdfs://localhost:9000</value> -->
                <value>quobyte://prod.corp.quobyte.com:7861/users/kaisers/hadoop-test/</value>
        </property>
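For completeness, the plugin's FileSystem class is also registered in
core-site.xml via the usual fs.<scheme>.impl mapping; the class name below is
only a placeholder standing in for our actual implementation:

```xml
<property>
        <name>fs.quobyte.impl</name>
        <!-- placeholder class name; our real FileSystem subclass goes here -->
        <value>com.quobyte.hadoop.QuobyteFileSystem</value>
</property>
```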

Any comments, or links to documentation regarding this, would be great.

Thanks for reading & best regards
Silvan Kaiser

Quobyte GmbH
Boyenstr. 41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender

