hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Trivial Update of "MountableHDFS" by BrockNoland
Date Wed, 11 Jan 2012 23:40:13 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "MountableHDFS" page has been changed by BrockNoland:
http://wiki.apache.org/hadoop/MountableHDFS?action=diff&rev1=15&rev2=16

  
  These projects (enumerated below) allow HDFS to be mounted (on most flavors of Unix) as a standard file system using the mount command. Once mounted, the user can operate on an instance of HDFS using standard Unix utilities such as 'ls', 'cd', 'cp', 'mkdir', 'find', 'grep', or use standard POSIX libraries like open, write, read, close from C, C++, Python, Ruby, Perl, Java, bash, etc.
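
For example, a minimal fuse-dfs session might look like the sketch below (the namenode host, port, and mount point are placeholders, and the wrapper script ships with the contrib/fuse-dfs build; adjust everything for your cluster):

{{{
# Mount an HDFS instance with fuse-dfs (host, port and mount point are
# examples only -- adapt them to your cluster and your fuse-dfs build)
mkdir -p /export/hdfs
./fuse_dfs_wrapper.sh dfs://namenode.example.com:9000 /export/hdfs

# Once mounted, ordinary Unix utilities operate on HDFS paths
ls /export/hdfs
mkdir /export/hdfs/scratch
cp /etc/hosts /export/hdfs/scratch/
find /export/hdfs/scratch -name 'hosts'
}}}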
  
- They are all based on the Filesystem in Userspace project FUSE ([[http://fuse.sourceforge.net/]]). The WebDAV-based one can be used with other WebDAV tools, but it requires FUSE to actually mount.
+ All, except HDFS NFS Proxy, are based on the Filesystem in Userspace project FUSE ([[http://fuse.sourceforge.net/]]). The WebDAV-based one can be used with other WebDAV tools, but it requires FUSE to actually mount.
  
  Note that a great thing about FUSE is that you can export a FUSE mount using NFS, so you can use fuse-dfs to mount HDFS on one machine and then export that mount using NFS. The bad news is that FUSE relies on the kernel's inode cache, since FUSE is path-based and not inode-based. If an inode is flushed from the kernel cache on the server, NFS clients get hosed: they try doing a read or an open with an inode the server no longer has a mapping for, and NFS chokes. So, while the NFS route gets you started quickly, for production it is more robust to automount FUSE on every machine from which you want to access HDFS.
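
A rough sketch of that NFS re-export path, assuming fuse-dfs is already mounted at /export/hdfs (the export options, hostnames, and fsid value are illustrative; exporting a FUSE mount through the kernel NFS server generally requires an explicit fsid):

{{{
# On the machine with the fuse-dfs mount: export it and reload the NFS server
echo '/export/hdfs  *(rw,fsid=1,no_subtree_check,sync)' >> /etc/exports
exportfs -ra

# On a client machine: mount the re-exported HDFS over plain NFS
mkdir -p /mnt/hdfs
mount -t nfs nfs-server.example.com:/export/hdfs /mnt/hdfs
}}}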
  
@@ -27, +27 @@

   * hdfs-fuse - a Google Code project that is very similar to contrib/fuse-dfs
   * webdav - HDFS exposed as a WebDAV resource
   * MapR - contains a closed-source, HDFS-compatible file system that supports read/write NFS access
+  * [[https://github.com/brockn/hdfs-nfs-proxy|HDFS NFS Proxy]] - exports HDFS as NFS without the use of FUSE
  
  == Supported Operating Systems ==
  
@@ -175, +176 @@

  production, so, quite naturally, we are very interested in any feedback.
  Also, I'd like to thank the authors of HADOOP-496 for their terrific work.
  
+ == HDFS NFS Proxy ==
+ 
+ Exports HDFS as NFS. It is written entirely in Java and uses the HDFS Java API directly.
+ 
+ https://github.com/brockn/hdfs-nfs-proxy
+ 
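
As a hedged illustration, a client might mount the proxy with a standard NFSv3 mount roughly like this (the proxy host, the exported path, and the mount options are assumptions; check the project README for the real export name and configuration):

{{{
# Mount HDFS through hdfs-nfs-proxy from an NFS client (host and export path
# are assumptions -- see the project README for the actual values)
mkdir -p /mnt/hdfs
mount -t nfs -o vers=3,proto=tcp,nolock proxy.example.com:/ /mnt/hdfs
ls /mnt/hdfs
}}}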
