From: Silvan Kaiser <silvan@quobyte.com>
To: user@hadoop.apache.org
Date: Thu, 16 Apr 2015 12:53:47 +0200
Subject: Question on configuring Hadoop 2.6.0 with a different filesystem

Hello!
I'm rather new to Hadoop and am currently testing the integration of a new file system as a replacement for HDFS, similar to the existing integrations for GlusterFS, GPFS, Ceph, etc. I do have an implementation of the FileSystem class, but I've hit a basic issue while trying to test it. This seems to be rooted in a misconfiguration of my setup: upon NameNode startup, the fs.defaultFS setting is rejected because the scheme does not match 'hdfs', which is true, as I'm using our plugin's own scheme. Log output:

~/tmp/hadoop-2.6.0> sbin/start-dfs.sh
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: starting namenode, logging to /home/kaisers/tmp/hadoop-2.6.0/logs/hadoop-kaisers-namenode-kaisers.out
localhost: starting datanode, logging to /home/kaisers/tmp/hadoop-2.6.0/logs/hadoop-kaisers-datanode-kaisers.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/kaisers/tmp/hadoop-2.6.0/logs/hadoop-kaisers-secondarynamenode-kaisers.out
0.0.0.0: Exception in thread "main" java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): quobyte://prod.corp.quobyte.com:7861/users/kaisers/hadoop-test/ is not of scheme 'hdfs'.
0.0.0.0:        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:429)
0.0.0.0:        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:413)
0.0.0.0:        at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:406)
0.0.0.0:        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:229)
0.0.0.0:        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)
0.0.0.0:        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:671)

Now, the first error message states that NameNode address settings are missing, but I could find no example where these are set for a different file system. All the examples only set fs.defaultFS, but that alone seems not to be sufficient.

The setup is pseudo-distributed as in the Hadoop documentation; core-site.xml contains these properties:

    <property>
        <name>fs.default.name</name>
        <value>quobyte://prod.corp.quobyte.com:7861/users/kaisers/hadoop-test/</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>quobyte://prod.corp.quobyte.com:7861/users/kaisers/hadoop-test/</value>
    </property>
    <property>
        <name>fs.quobyte.impl</name>
        <value>com.quobyte.hadoop.QuobyteFileSystem</value>
    </property>

Any comments or links to documentation regarding this would be great.

Thanks for reading & best regards
Silvan Kaiser

--
Quobyte GmbH
Boyenstr.
41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
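
P.S.: In case it clarifies the question — as far as I understand, the two keys named in the first error line are normally set in hdfs-site.xml along these lines. The localhost:9000 value is just a placeholder from the standard pseudo-distributed examples, not something from my setup, and I'm unsure whether these keys even apply when fs.defaultFS is not an hdfs:// URI:

```xml
<!-- Hypothetical hdfs-site.xml fragment; localhost:9000 is a placeholder,
     not a value from my setup. -->
<property>
    <name>dfs.namenode.rpc-address</name>
    <value>localhost:9000</value>
</property>
<property>
    <name>dfs.namenode.servicerpc-address</name>
    <value>localhost:9000</value>
</property>
```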