accumulo-dev mailing list archives

From Eric Newton <eric.new...@gmail.com>
Subject Re: Hadoop 2 compatibility issues
Date Thu, 16 May 2013 15:23:19 GMT
I've snuck some necessary changes in... doing integration testing on it
right now.

-Eric



On Wed, May 15, 2013 at 8:03 PM, John Vines <vines@apache.org> wrote:

> I will gladly do it next week, but I'd rather not have it delay the
> release. The question from there is whether this type of packaging change
> is too large to put in 1.5.1.
>
>
> On Wed, May 15, 2013 at 2:44 PM, Christopher <ctubbsii@apache.org> wrote:
>
> > So, I think that'd be great, if it works, but who is willing to do
> > this work and get it in before I make another RC?
> > I'd like to cut RC3 tomorrow if I have time. So, feel free to patch
> > these in to get it working before then... or by the next RC, if RC3
> > fails to pass a vote.
> >
> > --
> > Christopher L Tubbs II
> > http://gravatar.com/ctubbsii
> >
> >
> > On Wed, May 15, 2013 at 5:31 PM, Adam Fuchs <afuchs@apache.org> wrote:
> > > It seems like the ideal option would be to have one binary build
> > > that determines the Hadoop version and switches appropriately at
> > > runtime. Has anyone attempted to do this yet, and do we have an
> > > enumeration of the places in Accumulo code where the
> > > incompatibilities show up?
> > >
> > > One of the incompatibilities is org.apache.hadoop.mapreduce.JobContext
> > > switching between an abstract class and an interface. This can be
> > > fixed with something to the effect of:
> > >
> > >   // Looks up getConfiguration() reflectively so the call is not bound
> > >   // at compile time to either the abstract-class or interface form of
> > >   // JobContext. Needs java.lang.reflect.Method,
> > >   // org.apache.hadoop.conf.Configuration, and
> > >   // org.apache.hadoop.mapreduce.JobContext.
> > >   public static Configuration getConfiguration(JobContext context) {
> > >     Configuration configuration = null;
> > >     try {
> > >       Class<?> c = context.getClass().getClassLoader()
> > >           .loadClass("org.apache.hadoop.mapreduce.JobContext");
> > >       Method m = c.getMethod("getConfiguration");
> > >       Object o = m.invoke(context);
> > >       configuration = (Configuration) o;
> > >     } catch (Exception e) {
> > >       throw new RuntimeException(e);
> > >     }
> > >     return configuration;
> > >   }
> > >
> > > Based on a test I just ran, using that getConfiguration method instead
> > > of just calling the getConfiguration method on context should avoid
> > > the one incompatibility. Maybe with a couple more changes like that we
> > > can get down to one bytecode release for all known Hadoop versions?
> > >
> > > Adam
> >
>
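
The runtime switch raised at the top of the quoted thread can be illustrated
with a small sketch. This is not Accumulo code: HadoopVersionCheck is a
hypothetical name, and the only assumption is the one already described
above, that org.apache.hadoop.mapreduce.JobContext is a class in Hadoop 1
and an interface in Hadoop 2.

    import org.apache.hadoop.mapreduce.JobContext;

    // Hypothetical helper, for illustration only; not an Accumulo class.
    public final class HadoopVersionCheck {

      private HadoopVersionCheck() {}

      // JobContext is a class in Hadoop 1 and an interface in Hadoop 2,
      // so a single compiled artifact can tell the two apart at runtime.
      public static boolean runningAgainstHadoop2() {
        return JobContext.class.isInterface();
      }
    }

A check like this could select between small version-specific adapters,
although the reflective lookup in the quoted message avoids needing any
branch at all for getConfiguration().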

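For the call-site change described in the last quoted paragraph, here is a
hedged sketch of how an input format might look once the direct
context.getConfiguration() call is replaced by the reflective lookup.
ExampleInputFormat and the example.table.name property are made-up names
used only for illustration, not actual Accumulo classes or settings.

    import java.io.IOException;
    import java.lang.reflect.Method;
    import java.util.Collections;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.InputFormat;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;

    // Hypothetical input format, for illustration only; not an Accumulo class.
    public class ExampleInputFormat extends InputFormat<String, String> {

      @Override
      public List<InputSplit> getSplits(JobContext context) throws IOException {
        // Instead of context.getConfiguration(), which binds the compiled
        // bytecode to one particular shape of JobContext, resolve the method
        // reflectively, as in the quoted getConfiguration helper.
        Configuration conf;
        try {
          Method m = JobContext.class.getMethod("getConfiguration");
          conf = (Configuration) m.invoke(context);
        } catch (Exception e) {
          throw new IOException(e);
        }
        // conf is then used exactly as before, for example:
        String table = conf.get("example.table.name");
        return Collections.emptyList();
      }

      @Override
      public RecordReader<String, String> createRecordReader(InputSplit split,
          TaskAttemptContext context) {
        return null; // Omitted; not relevant to the compatibility point.
      }
    }

The point of the sketch is only that the compiled bytecode never contains a
direct call to JobContext.getConfiguration(), so the same jar should load
and run against either Hadoop line.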