hadoop-mapreduce-dev mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: 0.23 & trunk tars, will we be publishing 1 tar per component or a single tar? What about source tar?
Date Thu, 13 Oct 2011 12:52:02 GMT

+1 for #4

A separate "source tree" JAR could include the whole source tree without 
any built JARs.

Also, in the M2 repositories, publish all JARs with source JARs attached 
and no log4j.properties inside the published artifacts.
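
Roughly (a sketch only, not the actual Hadoop POMs; plugin versions and the
modules involved are omitted), that could be wired up with the stock Maven
plugins: maven-source-plugin to attach the -sources JARs and maven-jar-plugin
to keep log4j.properties out of the binary JARs:

  <build>
    <plugins>
      <!-- attach a -sources JAR next to each binary JAR deployed to the M2 repo -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-source-plugin</artifactId>
        <executions>
          <execution>
            <id>attach-sources</id>
            <goals>
              <goal>jar-no-fork</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <!-- keep log4j.properties out of the published binary JARs -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <configuration>
          <excludes>
            <exclude>**/log4j.properties</exclude>
          </excludes>
        </configuration>
      </plugin>
    </plugins>
  </build>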

> -----Original Message-----
> From: Alejandro Abdelnur [mailto:tucu@cloudera.com]
> Sent: Wednesday, October 12, 2011 9:38 PM
> To: common-dev@hadoop.apache.org; mapreduce-dev@hadoop.apache.org;
> hdfs-dev@hadoop.apache.org
> Subject: 0.23 & trunk tars, will we be publishing 1 tar per component or a
> single tar? What about source tar?
>
> Currently, common, hdfs and mapred create partial tars that are not usable
> unless they are stitched together into a single tar.
>
> With HADOOP-7642 the stitching happens as part of the build.
>
> The build currently produces the following tars:
>
> 1* common TAR
> 2* hdfs (partial) TAR
> 3* mapreduce (partial) TAR
> 4* hadoop (full, the stitched one) TAR
>
> #1 on its own does not run anything, and #2 and #3 on their own don't run
> either. #4 runs hdfs & mapreduce.
>
> Questions:
>
> Q1. Does it make sense to publish #1, #2 & #3? Or is #4 sufficient, so you
> just start the services you want (e.g. HBase would just use HDFS)?
>
> Q2. And what about a source TAR: does it make sense to have a source TAR per
> component, or a single TAR for the whole?
>
>
> For simplicity (for the build system and for users) I'd prefer a single
> binary TAR and a single source TAR.
>
> Thanks.
>
> Alejandro
>
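
FWIW, the single full TAR (#4) is essentially an assembly over the
per-component build output. A minimal maven-assembly-plugin descriptor could
look roughly like the sketch below; the directory names are hypothetical and
not the actual HADOOP-7642 layout:

  <assembly>
    <id>dist</id>
    <formats>
      <format>tar.gz</format>
    </formats>
    <includeBaseDirectory>true</includeBaseDirectory>
    <fileSets>
      <!-- hypothetical per-component build output directories -->
      <fileSet>
        <directory>../hadoop-common-project/target/hadoop-common-${project.version}</directory>
        <outputDirectory>/</outputDirectory>
      </fileSet>
      <fileSet>
        <directory>../hadoop-hdfs-project/target/hadoop-hdfs-${project.version}</directory>
        <outputDirectory>/</outputDirectory>
      </fileSet>
      <fileSet>
        <directory>../hadoop-mapreduce-project/target/hadoop-mapreduce-${project.version}</directory>
        <outputDirectory>/</outputDirectory>
      </fileSet>
    </fileSets>
  </assembly>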

