From: "Allen Wittenauer (JIRA)"
To: common-issues@hadoop.apache.org
Date: Thu, 25 May 2017 04:41:04 +0000 (UTC)
Subject: [jira] [Commented] (HADOOP-14453) Split the maven modules into several profiles

    [ https://issues.apache.org/jira/browse/HADOOP-14453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024180#comment-16024180 ]

Allen Wittenauer commented on HADOOP-14453:
-------------------------------------------

bq. Why once-per-day?

Because that's all I need. My general pattern is:

1. Do a pull and merge in the changes from trunk overnight.
2. Do the mvn install to get the local changes down into the appropriate repo while, in another window, I look over what actually changed.
3. Review, write a patch, or do whatever, usually in a fresh rebase from trunk into my test branch.
4. Afterward, switch back to trunk and re-build just the jars that were touched by that patch.
5. Go to step 3.
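As a rough sketch of that loop in command form (the branch name and module path below are placeholders for illustration, not part of the workflow described above):

{noformat}
# 1. Overnight: pull and merge the latest trunk into the local clone.
git checkout trunk
git pull origin trunk

# 2. Install everything so the local maven repo matches trunk; review
#    what actually changed in another window while this runs.
mvn install -DskipTests

# 3. Review/write the patch on a test branch freshly rebased onto trunk
#    (branch name is just a placeholder).
git rebase trunk my-test-branch

# 4. Back on trunk, rebuild only the jars the patch touched; the module
#    path here is illustrative -- substitute whatever actually changed.
git checkout trunk
mvn install -DskipTests -pl hadoop-common-project/hadoop-common

# 5. Go to step 3.
{noformat}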
I almost always work exclusively on trunk. On the rare occasions I work in other branches cut from trunk (I can't remember when I last touched branch-2), I use a different maven repo via MAVEN\_OPTS so there are no jar collisions. It's probably worth noting that dev-support/bin/test-patch and dev-support/bin/qbt, with the appropriate flags, automate a lot of this.

bq. I have 5 git clones in my machine.

Which begs the question: why don't people have 5 maven repos to match? Maven's shared repo model doesn't work well with same-project/multiple-repo dev patterns because there is nothing to prevent race conditions on it. Ran two 'mvn install's at the same time? Whoops. I wonder which jar those unit tests are running... A lot of work went into Yetus specifically to work around that issue, which is why we haven't seen any spurious "unknown class" or related issues in testing in over a year now. They were directly and completely related to having one maven repo for multiple, simultaneous patch runs.

The community can do what it wants with this JIRA (assuming it doesn't break anything). I'm just pointing out an alternative that has almost always worked for me, regardless of the project.

> Split the maven modules into several profiles
> ---------------------------------------------
>
>                 Key: HADOOP-14453
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14453
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: build
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Tsz Wo Nicholas Sze
>         Attachments: c14453_20170524.patch
>
> Currently all the modules are defined directly under <modules> in the root pom. As a result, we cannot choose to build only some of the modules; we have to build all of them in every case and, unfortunately, that takes a long time.
> We propose splitting the modules into multiple profiles so that we can build a subset of them by disabling some of the profiles. All the profiles are enabled by default, so all the modules are still built by default.
> For example, when making a change in common, we could build and run the tests under common by disabling the hdfs, yarn, mapreduce, etc. modules. This would reduce the development time spent compiling unrelated modules.
> Note that this is for local maven builds. We are not proposing to change the Jenkins builds, which always build all the modules.
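For illustration only: assuming the proposed profiles end up named after the major components (the ids hdfs, yarn, and mapreduce below are hypothetical; the real names depend on the attached patch), a common-only build could then be requested from the command line, since Maven deactivates a profile whose id is prefixed with '!' (or '-'):

{noformat}
# Profile ids are hypothetical -- check c14453_20170524.patch for the real ones.
# Deactivating the other component profiles leaves only the remaining
# (common) modules in the reactor for this invocation.
mvn install -P '!hdfs,!yarn,!mapreduce'
{noformat}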