Date: Fri, 15 Feb 2013 21:54:27 -0600
Subject: Re: initial plugin split done
From: Adam Berry
To: dev@hdt.incubator.apache.org

On Mon, Feb 11, 2013 at 10:49 PM, Mattmann, Chris A (388J) <chris.a.mattmann@jpl.nasa.gov> wrote:

> Hey Adam,
>
> Great work! Do you think it's now time for a first release? Even if it's
> not fully functional, and even if it doesn't support everything you
> mention in paragraph #2 below, it will be a great incremental milestone.
>
> Thoughts?
>
> Cheers,
> Chris
>
>
> On 2/11/13 8:29 PM, "Adam Berry" wrote:
>
> >Hey guys,
> >
> >So first, let me say thanks for your patience as I worked on this.
> >
> >I've split the original single plugin into a few logical units, as we
> >discussed before. I've put up the beginnings of a wiki page,
> >http://wiki.apache.org/hdt/HDTGettingStarted, with the beginnings of how
> >to grab this and work with it. The maven/tycho build support still needs
> >to go in, but I should be able to get to that this week.
> >
> >So now we are ready to start attacking multi-Hadoop-version support! We
> >need multi-version clients for launching jobs on Hadoop clusters, and also
> >for interacting with HDFS on the same clusters; those will need the
> >connectors that we discussed before. The other spot is the jars that get
> >added to the classpath for MapReduce projects.
> >
> >Although the plugins are logically split, some of the classes in them need
> >some more work to better split the work between core and ui, so keeping
> >refactoring in mind would be good, I think. For now, the Hadoop imports
> >are satisfied using the org.apache.hadoop.eclipse plugin, which bundles
> >Hadoop 1.0.4.
> >
> >I've added some JIRAs as trackers for this feature work, so feel free to
> >jump into the source and chime in!
> >
> >Cheers,
> >Adam

Hey guys, sorry for the delay in responding; I was struck down by the flu.

I'm really not sure here, so comments and thoughts are more than welcome. We can probably make the current set of tools work with 1.0 without too much trouble, but we would also need some tests and documentation before a release; not necessarily exhaustive, but at least something.

Pursuing a release quickly would likely help drive interest and momentum for the tools. I think I'm leaning in favor of doing that: it gets us used to doing Apache releases and the other pieces of infrastructure, and it would let us get visible within the Hadoop community sooner.

I believe it's still important to start work on the multi-version support as soon as possible, but I think we can do that in parallel with a release of tools that support the 1.0 line.

So let me know what you guys think, and we'll go from there.

Cheers,
Adam
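[Editor's note] The "connectors" idea raised in the thread — per-version Hadoop clients that the tooling selects at runtime — could be sketched roughly as below. This is purely an illustration under assumed names (`HadoopConnector`, `Hadoop1Connector`, `ConnectorRegistry`); none of these types are from the HDT codebase, and job submission is stubbed rather than calling real Hadoop APIs.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical abstraction over a specific Hadoop version's client.
interface HadoopConnector {
    String supportedVersion();        // Hadoop line this connector targets
    String submitJob(String jarPath); // stubbed job submission; returns a job id
}

// Sketch of a connector for the 1.0 line (the version HDT bundled at the time).
class Hadoop1Connector implements HadoopConnector {
    public String supportedVersion() { return "1.0.4"; }
    public String submitJob(String jarPath) { return "job_hadoop1_" + jarPath; }
}

// Registry that picks the right connector for a target cluster's version.
class ConnectorRegistry {
    private final Map<String, HadoopConnector> byVersion = new TreeMap<>();

    void register(HadoopConnector c) { byVersion.put(c.supportedVersion(), c); }

    HadoopConnector forVersion(String version) {
        HadoopConnector c = byVersion.get(version);
        if (c == null) throw new IllegalArgumentException("no connector for " + version);
        return c;
    }
}

public class ConnectorDemo {
    public static void main(String[] args) {
        ConnectorRegistry registry = new ConnectorRegistry();
        registry.register(new Hadoop1Connector());
        HadoopConnector c = registry.forVersion("1.0.4");
        System.out.println(c.submitJob("wordcount.jar")); // prints job_hadoop1_wordcount.jar
    }
}
```

In a real plugin split, each connector would likely live in its own bundle so that only one Hadoop version's jars end up on a given classpath; the registry shape above is just one way to make the version choice explicit.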