Subject: Re: building 0.21
From: Torsten Curdt
To: mapreduce-dev@hadoop.apache.org
Date: Wed, 9 Jun 2010 01:08:58 +0200

Finally got it working! The problem was that I thought I also needed a core jar, so I had 4 jars instead of 3. That resulted in a clash, and it also picked the wrong resolver. That was kind of painful.
Shouldn't the build be just like this?

  common>    ant clean install
  hdfs>      ant clean install
  mapreduce> ant clean install

I think the modularity makes total sense, but to me this looks more like a multi-module build. So a single

  mapreduce> ant clean install

should also build the other two. WDYT?

cheers
--
Torsten
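For what it's worth, one way to get that single-entry-point behavior with plain Ant would be a delegating target in mapreduce's build.xml that builds the sibling projects first via <subant>. This is only a rough sketch, assuming the three projects sit side by side as sibling directories and each has its own build.xml; the "install-all" target name is made up for illustration and is not part of the actual Hadoop build:

```xml
<!-- Hypothetical addition to mapreduce/build.xml (sketch, not the real
     Hadoop build): build common and hdfs before mapreduce itself.
     Assumes ../common/build.xml and ../hdfs/build.xml exist and each
     defines an "install" target. -->
<target name="install-all"
        description="Build common and hdfs, then mapreduce">
  <subant target="install">
    <!-- Order matters: common must be installed before hdfs. -->
    <filelist dir="${basedir}/.."
              files="common/build.xml,hdfs/build.xml"/>
  </subant>
  <!-- Finally run this project's own install target. -->
  <antcall target="install"/>
</target>
```

With something like that in place, a single "mapreduce> ant clean install-all" would chain through all three projects in dependency order.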