hdt-dev mailing list archives

From Adam Berry <ad...@apache.org>
Subject Re: initial plugin split done
Date Sat, 16 Feb 2013 03:54:27 GMT
On Mon, Feb 11, 2013 at 10:49 PM, Mattmann, Chris A (388J) <
chris.a.mattmann@jpl.nasa.gov> wrote:

> Hey Adam,
>
> Great work! Do you think it's now time for a first release? Even if it's
> not fully functional, and even if it doesn't support everything you
> mention in paragraph #2 below, it will be a great incremental milestone.
>
> Thoughts?
>
> Cheers,
> Chris
>
>
> On 2/11/13 8:29 PM, "Adam Berry" <adamb@apache.org> wrote:
>
> >Hey guys,
> >
> >So first, let me say thanks for the patience as I worked on this.
> >
> >I've split the original single plugin into a few logical units, as we
> >discussed before. I've started a wiki page,
> >http://wiki.apache.org/hdt/HDTGettingStarted, which covers how to grab
> >this and work with it. The maven/tycho build support still needs to go
> >in, but I should be able to get to that this week.
> >
> >So now we are ready to start attacking multi-version Hadoop support! We
> >need multi-version clients for launching jobs on Hadoop clusters and for
> >interacting with HDFS on the same clusters; those will need the connectors
> >that we discussed before. The other spot is the jars that get added to the
> >classpath for MapReduce projects.
> >
> >Although the plugins are logically split, some of the classes in them need
> >more work to better separate core from ui, so keeping refactoring in mind
> >would be good, I think. For now, the Hadoop imports are satisfied by the
> >org.apache.hadoop.eclipse plugin, which bundles Hadoop 1.0.4.
> >
> >I've added some JIRAs as trackers for this feature work, so feel free to
> >jump in to the source and chime in!
> >
> >Cheers,
> >Adam
>
Hey guys,

Sorry for the delay in responding; I was struck down by the flu.

I'm really not sure here, so comments and thoughts are more than welcome.

We can probably make the current set of tools work with 1.0 without too
much trouble, but we would also need some tests and documentation before
release, not necessarily exhaustive, but at least something. Pursuing a
release quickly would likely help to drive interest and momentum for the
tools.

I think I'm leaning in favor of doing that: it gets us used to doing Apache
releases and the other pieces of infrastructure, and it would let us become
visible within the Hadoop community sooner.

I believe it's still important to start work on the multi-version support as
soon as possible, but I think we can do that in parallel with the release of
some tools that support the 1.0 line.
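As a rough illustration of the connector idea from the earlier mail (all names
here are hypothetical sketches, not actual HDT classes), a per-version-line
registry might look something like this:

```java
// Hypothetical sketch only: HadoopConnector and ConnectorRegistry are
// illustrative names, not part of the current HDT source tree.
import java.util.HashMap;
import java.util.Map;

/** One connector per supported Hadoop version line (e.g. "1.0"). */
interface HadoopConnector {
    String versionLine();
    void submitJob(String jarPath);  // launch a job against a cluster
}

/** Stand-in for a connector built against the 1.0 client libraries. */
class Hadoop1Connector implements HadoopConnector {
    public String versionLine() { return "1.0"; }
    public void submitJob(String jarPath) {
        // A real connector would delegate to the bundled 1.0 client jars.
        System.out.println("Submitting " + jarPath + " via Hadoop 1.0 client");
    }
}

public class ConnectorRegistry {
    private static final Map<String, HadoopConnector> connectors = new HashMap<>();

    static void register(HadoopConnector c) {
        connectors.put(c.versionLine(), c);
    }

    static HadoopConnector forVersion(String line) {
        HadoopConnector c = connectors.get(line);
        if (c == null) {
            throw new IllegalArgumentException("No connector for Hadoop " + line);
        }
        return c;
    }

    public static void main(String[] args) {
        register(new Hadoop1Connector());
        forVersion("1.0").submitJob("wordcount.jar");
    }
}
```

The point is just that the UI and core plugins would talk to the interface,
while each version line ships its own implementation with the matching client
jars on its classpath.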

So let me know what you guys think, and we'll go from there.

Cheers,
Adam
