hadoop-common-dev mailing list archives

From Stephen Watt <sw...@redhat.com>
Subject [DISCUSS] - Committing client code to 3rd Party FileSystems within Hadoop Common
Date Thu, 23 May 2013 18:17:18 GMT
(Resending - I think the first time I sent this out it got lost within all the ByLaws voting)

Hi Folks

My name is Steve Watt and I am presently working on enabling glusterfs to be used as a Hadoop
FileSystem. Most of the work thus far has involved developing a Hadoop FileSystem plugin for
glusterfs. The plugin is now approaching stability, and I've been trying to work out where
the right place is to host, manage, and version it.
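For anyone unfamiliar with what such a plugin involves: a Hadoop FileSystem plugin is a subclass of the abstract org.apache.hadoop.fs.FileSystem class that maps Hadoop's filesystem operations onto the backing store. The sketch below illustrates the general shape only; the package, class name, and method bodies are hypothetical and are not the actual glusterfs plugin code.

```java
// Illustrative sketch of a Hadoop FileSystem plugin skeleton.
// Package and class names are hypothetical, not the real plugin's.
package org.apache.hadoop.fs.glusterfs;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GlusterFileSystem extends FileSystem {

    private URI uri;

    @Override
    public void initialize(URI uri, Configuration conf) throws IOException {
        super.initialize(uri, conf);
        this.uri = uri;
        // Connect to / mount the backing glusterfs volume here.
    }

    @Override
    public URI getUri() {
        return uri;
    }

    @Override
    public FSDataInputStream open(Path f, int bufferSize) throws IOException {
        // Return a stream that reads the file from the backing store.
        throw new UnsupportedOperationException("sketch only");
    }

    // A real plugin must also override the remaining abstract methods
    // (create, rename, delete, listStatus, mkdirs, getFileStatus, ...);
    // they are omitted here for brevity.
}
```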

Steve Loughran was kind enough to point out a few past threads in the community (such as http://lucene.472066.n3.nabble.com/Need-to-add-fs-shim-to-use-QFS-td4012118.html)
that show a project disposition to move away from Hadoop Common containing client code (plugins)
for 3rd party FileSystems. This makes sense: it gives the filesystem plugin developer more
autonomy and reduces Hadoop Common's dependence on 3rd party libraries.
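One reason hosting outside Hadoop Common is practical is that a plugin doesn't need to live in-tree to be used: it can be dropped onto the classpath and wired in through configuration. A minimal sketch of that wiring in core-site.xml, assuming a glusterfs:// URI scheme and an illustrative implementation class name:

```xml
<!-- core-site.xml: map the glusterfs:// scheme to an out-of-tree plugin.
     The implementation class name below is illustrative. -->
<property>
  <name>fs.glusterfs.impl</name>
  <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
</property>
```

With that in place, paths like glusterfs://host/path resolve through the plugin without any change to Hadoop Common itself.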

Before I embark down that path, can the PMC/Committers verify that the preference is still
to have client code for 3rd Party FileSystems hosted and managed outside of Hadoop Common?

Steve Watt
