From: Harsh J
Date: Fri, 16 Aug 2013 15:07:36 +0530
Subject: Re: "No FileSystem for scheme: hdfs" in namenode HA
To: user@hadoop.apache.org

You require the hadoop-hdfs dependency for the HDFS FileSystem to get
initialized. Your issue lies in how you're running the application, not
in your code.

If you use Maven, include the "hadoop-client" dependency to get all the
dependencies a Hadoop client program requires. Otherwise, run your
program with "hadoop jar", after ensuring that "hadoop classpath" does
indeed include your HDFS directories too.
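For reference, a minimal sketch of the Maven entry (the version below is
only illustrative; match it to whatever release your cluster actually
runs). hadoop-client transitively pulls in hadoop-hdfs, which is what
registers the hdfs:// scheme with the FileSystem loader:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <!-- Illustrative version only; use your cluster's Hadoop release -->
      <version>2.0.5-alpha</version>
    </dependency>

If you go the non-Maven route instead, verify the output of
"hadoop classpath" first, then launch with something like
"hadoop jar yourapp.jar TestConnect" (the jar name here is just an
example), so the HDFS jars on the node are picked up for you.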
On Fri, Aug 16, 2013 at 12:03 PM, ch huang wrote:
> hi all, I set up a namenode HA hadoop cluster and wrote some demo code:
>
> import java.io.FileNotFoundException;
> import java.io.IOException;
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class TestConnect {
>
>     private static void appendToHdfs(String content, String dst)
>             throws FileNotFoundException, IOException {
>         Configuration conf = new Configuration();
>         conf.set("dfs.replication", "2");
>         FileSystem fs = FileSystem.get(URI.create(dst), conf);
>         FSDataOutputStream out = fs.append(new Path(dst));
>         int readLen = content.getBytes().length;
>         out.write(content.getBytes(), 0, readLen);
>         out.close();
>         fs.close();
>     }
>
>     public static void createNewHDFSFile(String toCreateFilePath,
>             String content) throws IOException {
>         Configuration config = new Configuration();
>         FileSystem hdfs = FileSystem.get(URI.create(toCreateFilePath), config);
>         FSDataOutputStream os = hdfs.create(new Path(toCreateFilePath));
>         os.write(content.getBytes("UTF-8"));
>         os.close();
>         hdfs.close();
>     }
>
>     public static void listAll(String dir) throws IOException {
>         Configuration conf = new Configuration();
>         FileSystem fs = FileSystem.get(URI.create(dir), conf);
>         FileStatus[] stats = fs.listStatus(new Path(dir));
>         for (int i = 0; i < stats.length; ++i) {
>             if (stats[i].isFile()) {
>                 // regular file
>                 System.out.println(stats[i].getPath().toString());
>             } else if (stats[i].isDirectory()) {
>                 // directory
>                 System.out.println(stats[i].getPath().toString());
>             } else if (stats[i].isSymlink()) {
>                 // symlink
>                 System.out.println(stats[i].getPath().toString());
>             }
>         }
>         fs.close();
>     }
>
>     public static void main(String[] args) {
>         try {
>             createNewHDFSFile("hdfs://mycluster/alex", "mycluster");
>             listAll("hdfs://mycluster/alex");
>             Configuration config = new Configuration();
>             System.out.println("append is : " + config.get("dfs.hosts"));
>         } catch (FileNotFoundException e) {
>             e.printStackTrace();
>         } catch (IOException e) {
>             e.printStackTrace();
>         }
>     }
> }
>
> and client configuration file: hdfs-site.xml
>
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://mycluster</value>
>   </property>
>   <property>
>     <name>ha.zookeeper.quorum</name>
>     <value>node1:2181,node2:2181,node3:2181</value>
>   </property>
>   <property>
>     <name>dfs.nameservices</name>
>     <value>mycluster</value>
>   </property>
>   <property>
>     <name>dfs.ha.namenodes.mycluster</name>
>     <value>nn1,nn2</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>     <value>node1:8020</value>
>   </property>
>   <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>     <value>node2:8020</value>
>   </property>
>   <property>
>     <name>dfs.namenode.shared.edits.dir</name>
>     <value>qjournal://node1:8485;node2:8485;node3:8485/mycluster</value>
>   </property>
>   <property>
>     <name>dfs.client.failover.proxy.provider.mycluster</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
> </configuration>
>
> When I run the test, I get some error information. Can anyone help?
>
> log4j:WARN No appenders could be found for logger
> (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> more info.
> java.io.IOException: No FileSystem for scheme: hdfs
>     at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2296)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2303)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:87)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2342)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2324)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)
>     at TestConnect.createNewHDFSFile(TestConnect.java:35)
>     at TestConnect.main(TestConnect.java:80)

-- 
Harsh J