Mailing-List: hdfs-dev@hadoop.apache.org
From: Bharath Mundlapudi
Date: Fri, 14 Oct 2011 11:58:08 -0700 (PDT)
Subject: Re: 0.23 & trunk tars, will we be publishing 1 tar per component or a single tar? What about source tar?
To: mapreduce-dev@hadoop.apache.org, common-dev@hadoop.apache.org, hdfs-dev@hadoop.apache.org, raviteja@huawei.com

Another approach would be asking which tar to build:

mapred-tar (mapred and common)
hdfs-tar (hdfs and common)
hadoop-tar (all)

In this case, HBase can just use the hdfs-tar.

-Bharath

________________________________
From: Ravi Teja
To: mapreduce-dev@hadoop.apache.org; common-dev@hadoop.apache.org; hdfs-dev@hadoop.apache.org
Sent: Wednesday, October 12, 2011 9:43 PM
Subject: RE: 0.23 & trunk tars, will we be publishing 1 tar per component or a single tar? What about source tar?

I feel #4 is the better option.

Regards,
Ravi Teja

-----Original Message-----
From: Alejandro Abdelnur [mailto:tucu@cloudera.com]
Sent: Wednesday, October 12, 2011 9:38 PM
To: common-dev@hadoop.apache.org; mapreduce-dev@hadoop.apache.org; hdfs-dev@hadoop.apache.org
Subject: 0.23 & trunk tars, will we be publishing 1 tar per component or a single tar? What about source tar?

Currently common, hdfs and mapred create partial tars which are not usable unless they are stitched together into a single tar. With HADOOP-7642 the stitching happens as part of the build.

The build currently produces the following tars:

1* common TAR
2* hdfs (partial) TAR
3* mapreduce (partial) TAR
4* hadoop (full, the stitched one) TAR

#1 on its own does not run anything, and #2 and #3 on their own don't run either. #4 runs both HDFS and MapReduce.

Questions:

Q1. Does it make sense to publish #1, #2 & #3? Or is #4 sufficient, with users starting only the services they want (e.g., HBase would just use HDFS)?

Q2. And what about a source TAR: does it make sense to have a source TAR per component, or a single TAR for the whole?

For simplicity (for the build system and for users) I'd prefer a single binary TAR and a single source TAR.

Thanks.

Alejandro
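
For anyone trying this locally, here is a minimal sketch of producing the single stitched tar (#4) with the mavenized build. The -Pdist/-Dtar invocation follows the usual trunk BUILDING.txt convention; the output path and version string below are illustrative and may differ in your checkout:

  # Build all modules and assemble the full, stitched distribution tar,
  # skipping tests for speed; this is the tar that runs HDFS and MapReduce.
  mvn clean package -Pdist -Dtar -DskipTests

  # With HADOOP-7642 the stitched tar is assembled by the hadoop-dist
  # module, so it lands under its target directory, e.g.:
  #   hadoop-dist/target/hadoop-0.23.0-SNAPSHOT.tar.gz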
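And a sketch of the "#4 is sufficient, start only the services you want" reading of Q1, assuming the standard sbin/ layout of the full tar and a pre-formatted, configured HDFS (the version string is again illustrative):

  # Unpack the single full tar and bring up HDFS only; an HBase
  # deployment would stop here and never start the MapReduce side.
  tar xzf hadoop-0.23.0-SNAPSHOT.tar.gz
  cd hadoop-0.23.0-SNAPSHOT
  sbin/start-dfs.sh

  # The MapReduce/YARN daemons are started separately, only when needed:
  # sbin/start-yarn.sh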