From: "Giridharan Kesavan"
To: "common-dev@hadoop.apache.org"
Date: Thu, 16 Jul 2009 03:54:03 -0700
Subject: RE: Developing cross-component patches post-split

Based on the discussions, we have the first version of the patch uploaded to jira HADOOP-5107. This patch can be used for publishing and resolving Hadoop artifacts to/from a repository.

1) Publishing/resolving common/hdfs/mapred artifacts to/from the local filesystem.

"ant ivy-publish-local" publishes the jars locally to ${ivy.repo.dir}, which defaults to ${user.home}/ivyrepo.

"ant -Dresolver=local" resolves artifacts from the local filesystem, i.e. from ${user.home}/ivyrepo.

2) Publishing artifacts to people.apache.org.

An ssh resolver is configured which publishes common/hdfs/mapred artifacts to my home folder, /home/gkesavan/ivyrepo. Publishing requires authentication, whereas resolving requires passing an -Dusername argument with a value.

The reason I'm using my home folder is that I'm not sure whether we can publish the ivy artifacts to http://people.apache.org/repository or http://people.apache.org/repo/ (used mostly for maven artifacts). If someone can tell me about using people's repository, I can recreate the patch to publish ivy artifacts to the people server's standard repository.
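For readers unfamiliar with Ivy settings, the two resolvers described above might look roughly like the following. This is a sketch, not the actual HADOOP-5107 patch: the resolver names ("local", "people") and the repository layout patterns are assumptions, and the ssh resolver path matches the home folder mentioned above.

```xml
<!-- hypothetical ivysettings fragment; names and patterns are illustrative -->
<ivysettings>
  <property name="ivy.repo.dir" value="${user.home}/ivyrepo" override="false"/>
  <resolvers>
    <!-- selected with: ant -Dresolver=local -->
    <filesystem name="local">
      <ivy pattern="${ivy.repo.dir}/[organisation]/[module]/ivy-[revision].xml"/>
      <artifact pattern="${ivy.repo.dir}/[organisation]/[module]/[artifact]-[revision].[ext]"/>
    </filesystem>
    <!-- ssh resolver publishing to a home folder on people.apache.org;
         ${username} is supplied on the command line with -Dusername=... -->
    <ssh name="people" host="people.apache.org" user="${username}">
      <ivy pattern="/home/gkesavan/ivyrepo/[organisation]/[module]/ivy-[revision].xml"/>
      <artifact pattern="/home/gkesavan/ivyrepo/[organisation]/[module]/[artifact]-[revision].[ext]"/>
    </ssh>
  </resolvers>
</ivysettings>
```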
Thanks,
Giri

> -----Original Message-----
> From: Scott Carey [mailto:scott@richrelevance.com]
> Sent: Thursday, July 02, 2009 10:32 PM
> To: common-dev@hadoop.apache.org
> Subject: Re: Developing cross-component patches post-split
>
> On 7/1/09 11:58 PM, "Nigel Daley" wrote:
>
> > On Jul 1, 2009, at 10:16 PM, Todd Lipcon wrote:
> >
> >> On Wed, Jul 1, 2009 at 10:10 PM, Raghu Angadi wrote:
> >>
> >>> -1 for committing the jar.
> >>>
> >>> Most of the various options proposed certainly sound better.
> >>>
> >>> Can build.xml be updated such that Ivy fetches the most recent
> >>> (nightly) build?
> >
> > +1. Using ant command line parameters for Ivy, the hdfs and
> > mapreduce builds can depend on the latest Common build from one of:
> > a) a local filesystem ivy repo/directory (i.e. a developer build of
> > Common that is published automatically to a local fs ivy directory)
> > b) a maven repo (i.e. a stable published signed release of Common)
> > c) a URL
>
> The standard approach to this problem is the above -- a local
> filesystem repository with local developer build output, and a shared
> repository with build-system blessed content. A developer can choose
> which to use based on their needs.
>
> For ease of use, there is always a way to trigger the dependency
> chain for a "full" build. Typically with Java this is a master ant
> script or a maven POM. The build system must either know to build all
> at once in the proper dependency order, or versions are decoupled and
> dependency changes happen only when manually triggered (e.g. hdfs at
> revision 9999 uses common 9000, and then a check-in pushes hdfs 10000
> to use a new common version). Checking in jars is usually very
> frowned upon. Rather, metadata is checked in -- the revision number
> and branch that can create the jar -- and the jar can be fetched from
> a repository or built with that metadata.
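The "decoupled versions" scheme Scott describes -- hdfs pinned to a specific common build, bumped only by an explicit check-in -- would show up as checked-in metadata rather than a checked-in jar. A hypothetical ivy.xml fragment (module names and the revision string are made up for illustration):

```xml
<!-- hypothetical ivy.xml for hdfs, pinning a specific common build -->
<ivy-module version="2.0">
  <info organisation="org.apache.hadoop" module="hadoop-hdfs"/>
  <dependencies>
    <!-- bumped manually when hdfs moves to a new common version -->
    <dependency org="org.apache.hadoop" name="hadoop-core" rev="0.21.0-dev-r9000"/>
  </dependencies>
</ivy-module>
```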
> AFAICS those are the only two options -- tight coupling, or strict
> separation. The latter means that changes to common aren't picked up
> by hdfs or mapreduce until the dependent version is incremented in
> the metadata (harder and more restrictive for devs), and the former
> means that all are essentially the same coupled version (more
> complicated on the build system side but easy for devs).
> Developers can span both worlds, but the build system has to pick
> only one.
>
> > Option c can be a stable URL to the last successful Hudson build
> > and is in fact what all the Hudson hdfs and mapreduce builds could
> > be configured to use. An example URL would be something like:
> >
> > http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk/lastSuccessfulBuild/artifact/
> > ...
> >
> > Giri is creating a patch for this and will respond with more
> > insight on how this might work.
> >
> >> This seems slightly better than actually committing the jars.
> >> However, what should we do when the nightly build has failed
> >> hudson tests? We seem to sometimes go weeks at a time without a
> >> "green" build out of Hudson.
> >
> > Hudson creates a "lastSuccessfulBuild" link that should be used in
> > most cases (see my example above). If Common builds are failing we
> > need to respond immediately. Same for other sub-projects. We've got
> > to drop this culture that allows failing/flaky unit tests to
> > persist.
> >
> >>> HDFS could have a build target that builds the common jar from a
> >>> specified source location for common.
> >>
> >> This is still my preferred option. Whether it does this with a
> >> task or some other mechanism, I think having the source trees
> >> "loosely" tied together for developers is a must.
> >
> > -1. If folks really want this, then let's revert the project split.
> > :-o
> >
> > Nige
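Nigel's three resolution sources (local filesystem, maven repo, URL to the last successful Hudson build) could in principle be expressed as a single Ivy resolver chain, tried in order. A sketch only -- the resolver names and the artifact path under lastSuccessfulBuild are assumptions, not something specified in the thread:

```xml
<!-- hypothetical ivysettings fragment wiring options a), b), and c) -->
<ivysettings>
  <settings defaultResolver="default"/>
  <resolvers>
    <chain name="default">
      <!-- a) local developer builds published to the local fs repo -->
      <filesystem name="local-fs">
        <ivy pattern="${user.home}/ivyrepo/[organisation]/[module]/ivy-[revision].xml"/>
        <artifact pattern="${user.home}/ivyrepo/[organisation]/[module]/[artifact]-[revision].[ext]"/>
      </filesystem>
      <!-- b) stable signed releases from a maven repository -->
      <ibiblio name="maven2" m2compatible="true"/>
      <!-- c) latest green Common build from Hudson; artifact layout assumed -->
      <url name="hudson">
        <artifact pattern="http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk/lastSuccessfulBuild/artifact/[artifact]-[revision].[ext]"/>
      </url>
    </chain>
  </resolvers>
</ivysettings>
```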