hadoop-general mailing list archives

From Nigel Daley <nda...@yahoo-inc.com>
Subject Re: [VOTE] Should we create sub-projects for HDFS and Map/Reduce?
Date Thu, 07 Aug 2008 23:22:20 GMT
So we'll need to create and maintain three patch processes, one for each
component? That's not a trivial amount of work, given the way the patch
process is currently structured.

How will the unit tests be divided? For instance, will all three
components need MiniDFSCluster and other shared test infrastructure?
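For concreteness, here is a rough sketch of the kind of shared test
scaffolding I mean. The class name is made up, and the MiniDFSCluster
constructor and package are from memory, so treat the details as
approximate:

import java.io.IOException;

import junit.framework.TestCase;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class TestNeedsSharedCluster extends TestCase {

  public void testWriteAndRead() throws IOException {
    Configuration conf = new Configuration();
    // Start a single-datanode, in-process HDFS cluster (format the
    // namenode, use default rack assignments).
    MiniDFSCluster cluster = new MiniDFSCluster(conf, 1, true, null);
    try {
      FileSystem fs = cluster.getFileSystem();
      Path p = new Path("/test/data/file");
      assertTrue(fs.mkdirs(p.getParent()));
      assertTrue(fs.createNewFile(p));
      assertTrue(fs.exists(p));
    } finally {
      // Every test that does this drags the HDFS test infrastructure
      // into whichever component it lives in.
      cluster.shutdown();
    }
  }
}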

We can use Ivy now to manage dependencies on outside libraries.
We can build separate jars for mapred, hdfs, and core right now.
We can use email filters to reduce inbox emails.
We can use TestNG to categorize our tests and narrow the number of  
unit tests run for each component.
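As a rough sketch (the group names are just illustrative, and our tests
are JUnit today, so this assumes a conversion):

import org.testng.annotations.Test;

public class ComponentGroupedTests {

  // An HDFS-only build could select just this group, e.g.:
  //   java org.testng.TestNG -groups hdfs testng.xml
  @Test(groups = { "hdfs" })
  public void hdfsOnlyCheck() {
    // HDFS-specific assertions would go here.
  }

  // A map/reduce build would select "mapred" instead and skip the rest.
  @Test(groups = { "mapred" })
  public void mapredOnlyCheck() {
    // map/reduce-specific assertions would go here.
  }

  // Tests common to everything could be tagged "core" and always run.
  @Test(groups = { "core" })
  public void sharedUtilityCheck() {
    // core/common assertions would go here.
  }
}

A per-component build could then select only its own group, e.g.
java org.testng.TestNG -groups hdfs testng.xml.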

-1 until I better understand the benefit of making the split.

Nige

On Aug 5, 2008, at 10:18 PM, Owen O'Malley wrote:

> I think the time has come to split Hadoop Core into three pieces:
>
>  1. Core (src/core)
>  2. HDFS (src/hdfs)
>  3. Map/Reduce (src/mapred)
>
> There will be lots of details to work out, such as what we do with  
> tools and contrib, but I think it is a good idea. This will create  
> separate JIRAs and mailing lists for HDFS and map/reduce, which will  
> make the community much more approachable. I would propose that we  
> wait until 0.19.0 is released to give us time to plan the split.
>
> -- Owen

