From: Alejandro Abdelnur
Date: Fri, 29 Jul 2011 17:17:07 -0700
Subject: Re: follow up Hadoop mavenization work
To: general@hadoop.apache.org

Joep,
Ivy & Maven pull JARs from the Maven repositories you specify. Maven verifies checksums, and I assume Ivy does too. You could turn your verified ~/.m2 into a Maven proxy and switch off fetching JARs that are not found in the proxy cache. Bottom line, for your concerns Ivy and Maven are equally good or bad.

Thanks.

Alejandro

On Fri, Jul 29, 2011 at 5:09 PM, Rottinghuis, Joep wrote:
> Thanks for the replies.
>
> To elaborate on why I want to build on a server w/o Internet access: the
> build should not reach out to the Internet and grab jars from unverified
> sources w/o an md5 hash check etc.
> The resulting code will run on a large production cluster with
> sensitive/private data. From a compliance and risk perspective I want to be
> able to control which jars get pulled in from where.
>
> Manual verification of ~/.m2, tar.gz, and scp to the build server is an
> acceptable workaround.
> A Maven proxy simply bypasses the firewalls, which are there for good reason.
>
> Looking forward to trying this all on trunk after the patch is committed.
> Until then I'll work on making this function on 0.22.
>
> Thanks,
>
> Joep
>
> -----Original Message-----
> From: Steve Loughran [mailto:stevel@apache.org]
> Sent: Friday, July 29, 2011 8:32 AM
> To: general@hadoop.apache.org
> Subject: Re: follow up Hadoop mavenization work
>
> On 29/07/11 03:10, Rottinghuis, Joep wrote:
> > Alejandro,
> >
> > Are you trying the use case where people want to locally build a
> > consistent set of common, hdfs, and mapreduce without the downstream
> > projects depending on published Maven SNAPSHOTs?
> > I'm working to get this going on 0.22 right now (see HDFS-843, HDFS-2214,
> > and I'll have to file two equivalent bugs on mapreduce).
> >
> > Part of the problem is that the assumption was that people always compile
> > hdfs against hadoop-common-0.xyz-SNAPSHOT.
> > When applying one patch at a time from Jira attachments that may be fine.
> >
> > If I set up a Jenkins build I will want to make sure that first
> > hadoop-common builds with a new build number (not a snapshot), then hdfs
> > against that same build number, then mapreduce against hadoop-common and
> > hdfs.
> > Otherwise you can get a situation where the mapreduce build is still
> > running while the hadoop-common build has already produced a new
> > snapshot build.
> >
> > Local caching in the ~/.m2 and ~/.ivy2 repos makes this situation even
> > more complex.
>
> One option here is to set up more than one virtual machine (the CentOS 6.0
> minimal images are pretty lightweight), delegate work to these Jenkins
> instances, force different branches onto different virtual hosts, and have
> Jenkins build things serially on a single machine. That ensures a strict
> order and isolates you. You can even have Ant targets to purge the
> repository caches.
>
> I have some CentOS VMs set up to do release work on my desktop, as it
> ensures that I never release under-development code; the functional test
> runs don't interfere with my desktop test runs, and I can keep editing the
> code. It works OK if you have enough RAM and HDD to spare.
>
> -steve
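[Editor's note] Alejandro's suggestion of turning a verified ~/.m2 into the only source of artifacts can be sketched in Maven's settings.xml. This is a minimal sketch, not anything from the thread: the mirror id and the `/opt/verified-m2-repo` path are hypothetical placeholders for wherever the pre-verified repository lives on the firewalled build server.

```xml
<!-- Hypothetical ~/.m2/settings.xml on the firewalled build server.
     Every remote repository, including Maven central, is redirected to a
     local file:// mirror holding the pre-verified JARs, so nothing is
     fetched from the Internet. -->
<settings>
  <mirrors>
    <mirror>
      <id>verified-local</id>
      <!-- "*" catches all remote repositories declared anywhere -->
      <mirrorOf>*</mirrorOf>
      <url>file:///opt/verified-m2-repo</url>
    </mirror>
  </mirrors>
  <!-- fail fast rather than attempt any network access -->
  <offline>true</offline>
</settings>
```

With this in place, a build should fail with a "not found" error for any artifact missing from the verified mirror, which is the controlled behavior Joep asks for.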
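[Editor's note] Steve's aside about Ant targets that purge the repository caches could look roughly like the fragment below, assuming the default Ivy and Maven cache locations; the project and target names are illustrative only.

```xml
<!-- Hypothetical build.xml fragment: wipe the local dependency caches so
     a Jenkins machine starts each serial build from a clean state. -->
<project name="cache-tools" default="purge-caches">
  <target name="purge-caches"
          description="Delete the local Ivy and Maven artifact caches">
    <delete dir="${user.home}/.ivy2/cache"/>
    <delete dir="${user.home}/.m2/repository"/>
  </target>
</project>
```

Running this before each build in the common → hdfs → mapreduce sequence removes any stale SNAPSHOT artifacts that local caching could otherwise mix into the build.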