hadoop-general mailing list archives

From Eric Baldeschwieler <eri...@hortonworks.com>
Subject Re: where do side-projects go in trunk now that contrib/ is gone?
Date Fri, 01 Mar 2013 05:02:16 GMT
I agree with where this is going.

Swift and S3 are compelling enough that they should be in the source tree IMO.  Hadoop needs
to play well with common platforms such as the major clouds.

On the other hand, it would be great if we could segregate them enough that each builds as
its own JAR, so folks have a clean way to opt out of pulling in their dependencies and of
building / testing them.
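One way to keep optional filesystem JARs out of the core dependency set, in the spirit of what Eric describes, is to probe for the implementation class at runtime and degrade gracefully when the JAR is absent. A minimal sketch (the S3A class name is illustrative of the idea, not a claim about how Hadoop wires this up):

```java
// Hypothetical sketch: detect whether an optional filesystem
// implementation JAR is on the classpath without depending on it
// at compile time. Absence just means the extension wasn't bundled.
public class OptionalFsProbe {

    // Returns true if the named class can be loaded from the
    // current classpath, false if it is missing.
    public static boolean isAvailable(String implClass) {
        try {
            Class.forName(implClass);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The class name below is an example only; any optional
        // extension class could be probed the same way.
        System.out.println(
            isAvailable("org.apache.hadoop.fs.s3a.S3AFileSystem"));
    }
}
```

A build could use the same idea in reverse: skip an extension's tests entirely when its module (and hence its dependencies) was not built.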

On Feb 14, 2013, at 6:05 AM, Steve Loughran <steve.loughran@gmail.com> wrote:

> On 13 February 2013 20:07, Alejandro Abdelnur <tucu@cloudera.com> wrote:
>> Steve,
>> I like the idea of testing all filesystems for expected behavior; in HttpFS we
>> are already doing something along these lines, testing HttpFS against HDFS and
>> LocalFS, and also testing two WebHDFS clients.
> excellent. I look forward to your test contributions!
>> Regarding where these 'extensions' would go, well, we could have something
>> like share/hadoop/common/filesystem-ext/s3 and whoever wants to use s3
>> would have to symlink those JARs into common/lib. Or we could have a way to
>> select, via a HADOOP_COMMON_FS_EXT env var, which extension JARs to pick up. I
>> guess the BigTop folks could help define this magic.
> I was thinking less of "where should it go at install time" and more of "where
> do we keep it in SVN"
> at install time you'd need the JAR + any dependencies on the daemon paths
> (if it is to be everywhere) or uploaded with a job into the distributed cache.
> Testing that the latter works with FileSystem.get() would be something to
> play with.
> & yes, bigtop could help there
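The FileSystem.get() mechanism Steve refers to resolves a URI scheme to an implementation class through configuration. A standalone sketch of that lookup follows; the `fs.<scheme>.impl` key format mirrors Hadoop's convention, but the class itself is a simplification for illustration, not Hadoop code:

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the scheme-to-implementation lookup behind
// FileSystem.get(): the URI scheme selects a class name from
// configuration keys of the form fs.<scheme>.impl. An extension JAR
// on the classpath (or in the distributed cache) supplies the class.
public class FsResolver {
    private final Map<String, String> conf = new HashMap<>();

    public void set(String key, String value) {
        conf.put(key, value);
    }

    // Looks up the implementation class name registered for the
    // given URI's scheme, failing loudly if none is configured.
    public String implFor(URI uri) {
        String scheme = uri.getScheme();
        String impl = conf.get("fs." + scheme + ".impl");
        if (impl == null) {
            throw new IllegalArgumentException(
                "No filesystem configured for scheme: " + scheme);
        }
        return impl;
    }
}
```

Under this model, testing "the latter works" amounts to shipping the extension JAR with the job, setting the `fs.<scheme>.impl` key, and checking that resolution succeeds on the cluster side.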
