hbase-dev mailing list archives

From Andrew Purtell <apurt...@apache.org>
Subject Re: Publishing jars for hbase compiled against hadoop 0.23.x/hadoop 2.0.x
Date Wed, 16 May 2012 23:06:41 GMT
[cc bigtop-dev]

On Wed, May 16, 2012 at 3:22 PM, Jesse Yates <jesse.k.yates@gmail.com> wrote:
>  +1 on a small number of supported versions with different classifiers that
> only span a limited api skew to avoid a mountain of reflection. Along with
> that, support for the builds via jenkins testing.

and

>> I think HBase should consider having a single blessed set of
>> dependencies and only one build for a given release,
>
> This would be really nice, but seems a bit unreasonable given that we are
> the "hadoop database" (if not in name, at least by connotation). I think
> limiting our support to the latest X versions (2-3?) is reasonable given
> consistent APIs

I was talking release mechanics not source/compilation/testing level
support. Hence the suggestion for multiple Jenkins projects for the
dependency versions we care about. That care could be scoped like you
suggest.

I like what Bigtop espouses: carefully constructed snapshots of the
world, well tested in total. Seems easier to manage than laying out
various planes from increasingly higher dimensional spaces. If they
get traction we can act as a responsible upstream project. As for our
official release, we'd have maybe two, I'll grant you that, Hadoop 1
and Hadoop 2.

X=2 will be a challenge. It's not just the Hadoop version that could
change, but the versions of all of its dependencies, SLF4J, Guava,
JUnit, protobuf, etc. etc. etc.; and that could happen at any time on
point releases. If we are supporting the whole series of 1.x and 2.x
releases, then that could be a real pain. Guava is a good example: it
was a bit painful for us to move from 9 to 11, though not so much for
core, as far as I know.
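The "mountain of reflection" Jesse mentions above typically looks like the
shim pattern below — a minimal, generic sketch (the class and method names
here are hypothetical illustrations, not actual HBase code): probe for an
API that only exists in newer dependency versions and fall back when it is
absent.

```java
import java.lang.reflect.Method;

public class ShimExample {
    // Hypothetical shim: invoke a no-arg method that may only exist in
    // newer versions of a dependency; return a fallback if it is absent.
    static String callOrFallback(Object target, String methodName,
                                 String fallback) {
        try {
            Method m = target.getClass().getMethod(methodName);
            return String.valueOf(m.invoke(target));
        } catch (ReflectiveOperationException e) {
            // Older dependency on the classpath: method not present.
            return fallback;
        }
    }

    public static void main(String[] args) {
        // String.isEmpty() exists, so the reflective call succeeds.
        System.out.println(callOrFallback("", "isEmpty", "unsupported"));
        // No such method, so we take the fallback path.
        System.out.println(callOrFallback("", "noSuchMethod", "unsupported"));
    }
}
```

Each such probe is cheap in isolation, but one per skewed API across
several supported Hadoop lines is exactly the mountain being warned about.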

> - we should be very careful in picking which new versions
> we support and when. A lot of the pain with the hadoop distributions has
> been the wildly shifting APIs, making a lot of work painful for handling
> different versions (distcp/backup situations come to mind here, among
> other things).

We also have test dependencies on interfaces that are LimitedPrivate
at best. It's a source of friction.

> +1 on the idea of having classifiers for the different versions we actually
> release as proper artifacts, and should be completely reasonable to enable
> via profiles. I'd have to double check as to _how_ people would specify
> that classifier/version of hbase from the maven repo, but it seems entirely
> possible (my worry here is about the collision with the -tests and -sources
> classifiers, which are standard mvn conventions for different builds).
> Otherwise, with maven it is very reasonable to have people hosting profiles
> for versions that they want to support - generally, this means just another
> settings.xml file that includes another profile that people can activate on
> their own, when they want to build against their own version.
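The profile arrangement described above could look roughly like this — a
hedged sketch only, with hypothetical profile ids, property names, and
version numbers (the actual HBase pom may differ):

```xml
<!-- Hypothetical: each profile pins a Hadoop line and sets a matching
     classifier property that the build can attach to the artifact. -->
<profiles>
  <profile>
    <id>hadoop-1</id>
    <activation><activeByDefault>true</activeByDefault></activation>
    <properties>
      <hadoop.version>1.0.2</hadoop.version>
      <hbase.classifier>hadoop1</hbase.classifier>
    </properties>
  </profile>
  <profile>
    <id>hadoop-2</id>
    <properties>
      <hadoop.version>2.0.0-alpha</hadoop.version>
      <hbase.classifier>hadoop2</hbase.classifier>
    </properties>
  </profile>
</profiles>
```

A consumer would then pick the flavor via `<classifier>hadoop2</classifier>`
in their dependency declaration, since Maven resolves classified artifacts
alongside the main one.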

This was a question I had; maybe you know. What happens if you want to
build something like <artifact>-<version>-<classifier>-tests or
-sources? Would that work? Otherwise we'd have to add a suffix using
property substitution in profiles, right?
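For what it's worth, the property-substitution suffix might be sketched
like this — assuming the `hbase.classifier` property is set by a profile
as above; the maven-jar-plugin's `test-jar` goal accepts a `classifier`
parameter (default "tests") that can be overridden:

```xml
<!-- Hypothetical: produce hbase-<version>-hadoop2-tests.jar instead of
     the default hbase-<version>-tests.jar, by suffixing the flavor
     classifier from a profile property. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <goals><goal>test-jar</goal></goals>
      <configuration>
        <classifier>${hbase.classifier}-tests</classifier>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Maven itself only supports a single classifier per attached artifact, so
combining flavor and -tests/-sources does mean mangling them into one
string like this, rather than stacking two classifiers.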

Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet
Hein (via Tom White)
