hadoop-general mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: follow up Hadoop mavenization work
Date Fri, 29 Jul 2011 15:31:41 GMT
On 29/07/11 03:10, Rottinghuis, Joep wrote:
> Alejandro,
>
> Are you trying the use case where people want to locally build a consistent
> set of common, hdfs, and mapreduce without the downstream projects depending
> on published Maven SNAPSHOTs?
> I'm working to get this going on 0.22 right now (see HDFS-843 and HDFS-2214;
> I'll have to file two equivalent bugs on mapreduce).
>
> Part of the problem is the assumption that people always compile hdfs
> against hadoop-common-0.xyz-SNAPSHOT.
> When applying one patch at a time from Jira attachments that may be fine.
>
> If I set up a Jenkins build, I will want to make sure that hadoop-common
> builds first with a new build number (not a snapshot), then hdfs against
> that same build number, then mapreduce against hadoop-common and hdfs.
> Otherwise you can get a situation where the mapreduce build is still running
> while the hadoop-common build has already produced a new snapshot build.
>
> Local caching in ~/.m2 and ~/.ivy2 repos makes this situation even more complex.
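
On the build-number side: once the mavenized build is in place, stock Maven 
can pin a concrete version per run. A minimal sketch, assuming the versions 
plugin, with the build number invented for illustration (this is not what 
the 0.22 build does today):

    # in hadoop-common: stamp this run with a fixed build number, then install
    mvn versions:set -DnewVersion=0.22.0-build42
    mvn clean install -DskipTests
    # hdfs and mapreduce then resolve 0.22.0-build42, not a SNAPSHOT, so a
    # concurrent snapshot publish can't swap artifacts out mid-build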

One option here is to set up more than one virtual machine (the CentOS 6.0 
minimal install is pretty lightweight), delegate work to Jenkins instances 
on those VMs, force different branches onto different virtual hosts, and 
have Jenkins build everything serially on a single machine. That ensures a 
strict build order and isolates you. You can even have Ant targets to purge 
the repository caches, along the lines of the sketch below.
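
A minimal version of such a target (the target name and the exact cache 
paths are mine; adjust them to your layout):

    <!-- wipe cached Hadoop artifacts so the next build resolves fresh ones -->
    <target name="purge-repo-caches"
            description="Remove Hadoop artifacts from local Maven and Ivy caches">
      <delete dir="${user.home}/.m2/repository/org/apache/hadoop" quiet="true"/>
      <delete dir="${user.home}/.ivy2/cache/org.apache.hadoop" quiet="true"/>
    </target>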

I have some CentOS VMs set up to do release work on my desktop, as it 
ensures that I never release under-development code; the functional test 
runs don't interfere with my desktop test runs, and I can keep editing the 
code. It works OK if you have enough RAM and HDD to spare.

-steve

