Date: Thu, 13 Oct 2011 10:11:15 +0500
From: Uma Maheswara Rao G 72686 <maheswara@huawei.com>
Subject: Re: 0.23 & trunk tars, will we be publishing 1 tar per component or a single tar? What about source tar?
In-reply-to: <4E95CD94.1040904@hortonworks.com>
To: common-dev@hadoop.apache.org
Cc: hdfs-dev@hadoop.apache.org, Eric Yang, mapreduce-dev@hadoop.apache.org

+1 for option 4. Let the user start the required services from it.

Regards,
Uma

----- Original Message -----
From: giridharan kesavan
Date: Wednesday, October 12, 2011 11:24 pm
Subject: Re: 0.23 & trunk tars, will we be publishing 1 tar per component or a single tar? What about source tar?
To: hdfs-dev@hadoop.apache.org
Cc: Eric Yang, mapreduce-dev@hadoop.apache.org, common-dev@hadoop.apache.org

> +1 for option 4
>
> On 10/12/11 9:50 AM, Eric Yang wrote:
> > Option #4 is the most practical use case for making a release.
> > Bleeding-edge developers would prefer to mix and match different
> > versions of hdfs and mapreduce. Hence, it may be good to publish
> > the single tarball for releases, but continue to support component
> > tarballs for developers and rpm/deb packaging, e.g. in case
> > someone wants to run hdfs + hbase, but not mapreduce, for a
> > specialized application. The per-component tarballs should
> > continue to work for rpm/deb packaging.
> >
> > regards,
> > Eric
> >
> > On Oct 12, 2011, at 9:30 AM, Prashant Sharma wrote:
> >
> >> I support the idea of having 4 as an additional option.
> >>
> >> On Wed, Oct 12, 2011 at 9:37 PM, Alejandro Abdelnur wrote:
> >>> Currently common, hdfs and mapred create partial tars which are
> >>> not usable unless they are stitched together into a single tar.
> >>>
> >>> With HADOOP-7642 the stitching happens as part of the build.
> >>>
> >>> The build currently produces the following tars:
> >>>
> >>> 1* common TAR
> >>> 2* hdfs (partial) TAR
> >>> 3* mapreduce (partial) TAR
> >>> 4* hadoop (full, the stitched one) TAR
> >>>
> >>> #1 on its own does not run anything, and #2 and #3 on their own
> >>> don't run either. #4 runs hdfs & mapreduce.
> >>>
> >>> Questions:
> >>>
> >>> Q1. Does it make sense to publish #1, #2 & #3? Or is #4
> >>> sufficient, with users starting just the services they want
> >>> (i.e. HBase would just use HDFS)?
> >>>
> >>> Q2. And what about a source TAR: does it make sense to have a
> >>> source TAR per component, or a single TAR for the whole?
> >>>
> >>> For simplicity (for the build system and for users) I'd prefer
> >>> a single binary TAR and a single source TAR.
> >>>
> >>> Thanks.
> >>>
> >>> Alejandro
> >>>
> >>
> >>
> >> --
> >> Prashant Sharma
> >> Pramati Technologies
> >> Begumpet, Hyderabad.
> >
>
> --
> -Giri
>
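For readers curious what "stitching" the partial tars into the full tar involves, below is a minimal illustrative sketch in Python. It is not the actual HADOOP-7642 build logic; the tarball names, the 0.23.0 version string, and the overlay-style layout are assumptions made purely for illustration.

#!/usr/bin/env python
"""Illustrative sketch of stitching per-component partial tars into a
single full tar, in the spirit of the thread above.

NOT the actual HADOOP-7642 build logic; file names and layout are
assumptions made for illustration only.
"""
import os
import tarfile
import tempfile

# Hypothetical partial tarballs, one per component (names assumed).
PARTIAL_TARS = [
    "hadoop-common-0.23.0.tar.gz",
    "hadoop-hdfs-0.23.0.tar.gz",
    "hadoop-mapreduce-0.23.0.tar.gz",
]
FULL_TAR = "hadoop-0.23.0.tar.gz"
PREFIX = "hadoop-0.23.0"  # top-level directory of the stitched tar


def stitch(partials, full_tar, prefix):
    """Unpack each partial tar into one staging tree, then repack it.

    Later components overlay earlier ones, so shared directories such
    as bin/ and lib/ end up merged in the final tree.
    """
    staging = tempfile.mkdtemp(prefix="stitch-")
    root = os.path.join(staging, prefix)
    os.makedirs(root, exist_ok=True)

    for partial in partials:
        with tarfile.open(partial, "r:gz") as tar:
            for member in tar.getmembers():
                # Strip each partial tar's own top-level directory so
                # all components merge under the single `prefix` dir.
                parts = member.name.split("/", 1)
                if len(parts) < 2:
                    continue
                member.name = parts[1]
                tar.extract(member, root)

    with tarfile.open(full_tar, "w:gz") as out:
        out.add(root, arcname=prefix)


if __name__ == "__main__":
    stitch(PARTIAL_TARS, FULL_TAR, PREFIX)

Applied to the three (assumed) partial tarballs, this produces a single hadoop-0.23.0.tar.gz analogous to tar #4 in Alejandro's list, which is why option 4 can subsume options 1-3 for end users while the per-component tars remain useful for packagers.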