hadoop-common-user mailing list archives

From Chris D <chris....@gmail.com>
Subject Support for clustered, shared POSIX FS
Date Thu, 01 Jul 2010 18:32:09 GMT
Hi all,

I’d like to create a new URI scheme for a clustered, POSIX-compliant
filesystem shared between all nodes. A number of such filesystems already
exist (think HDFS without the POSIX non-compliance). We can, of course, run
HDFS on top of such a filesystem, but that adds an unnecessary and
inefficient extra layer. Why have a master retrieve a set of data from one
clustered, distributed FS, only to distribute it back out to the same
cluster on a different distributed FS (HDFS)?
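
For context, a job can in principle already bypass HDFS by using the
existing file:// scheme against the shared mount, provided every node
mounts the shared FS at the same path. A rough Java sketch of that
baseline (the /shared/input path and the SharedMountCheck class name are
made up for illustration, not from anyone's setup):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SharedMountCheck {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Use the local (POSIX) file system, i.e. the shared mount,
        // as the default FS instead of HDFS.
        conf.set("fs.default.name", "file:///");

        FileSystem fs = FileSystem.get(conf);
        // Every node that mounts the shared FS at the same path sees the
        // same files here, so no HDFS layer is involved.
        for (FileStatus status : fs.listStatus(new Path("/shared/input"))) {
          System.out.println(status.getPath());
        }
      }
    }

Setting the same property in core-site.xml would apply this cluster-wide;
the point of a dedicated scheme would be to make the "shared" nature of
these paths explicit rather than piggybacking on file://.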

Under the new scheme, each MapReduce slave would read its input data from
what looks like a local file:/// path. Because we’re assuming POSIX
compliance, LocalFileSystem seems to be the best starting point.
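
To make that concrete, here is a rough, untested sketch of what such a
FileSystem class might look like. The package, class name, and the
"sharedfs" scheme are made up for illustration; it extends
RawLocalFileSystem (the raw, non-checksumming core that LocalFileSystem
wraps) rather than LocalFileSystem itself, so that no .crc sidecar files
get written to the shared mount, and it relies entirely on the assumption
that every node mounts the shared FS at the same path:

    package org.example.fs;   // hypothetical package

    import java.net.URI;

    import org.apache.hadoop.fs.RawLocalFileSystem;

    public class SharedPosixFileSystem extends RawLocalFileSystem {

      // The scheme this FileSystem is registered under (made up for this sketch).
      private static final URI SCHEME_URI = URI.create("sharedfs:///");

      @Override
      public URI getUri() {
        // Advertise the new scheme so that paths such as
        // sharedfs:///data/input pass checkPath() and resolve through this
        // class; the path component maps straight onto the shared POSIX mount.
        return SCHEME_URI;
      }
    }

Wiring it up would then just be configuration: set fs.sharedfs.impl to
org.example.fs.SharedPosixFileSystem in core-site.xml, after which
FileSystem.get(URI.create("sharedfs:///some/path"), conf) should hand back
an instance of this class and job input/output paths can use the new
scheme directly.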

Please let me know of any pitfalls or mistakes you see in this approach.
Any advice is greatly appreciated as well, since the Hadoop source tree is
new to me and a bit intimidating.

Best,

--Chris
