accumulo-dev mailing list archives

From Christopher <ctubb...@apache.org>
Subject Re: Hadoop 2 compatibility issues
Date Wed, 15 May 2013 21:44:07 GMT
So, I think that'd be great, if it works, but who is willing to do
this work and get it in before I make another RC?
I'd like to cut RC3 tomorrow if I have time. So, feel free to patch
these in to get it to work before then... or, by the next RC if RC3
fails to pass a vote.

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii


On Wed, May 15, 2013 at 5:31 PM, Adam Fuchs <afuchs@apache.org> wrote:
> It seems like the ideal option would be to have one binary build that
> determines the Hadoop version and switches appropriately at runtime. Has
> anyone attempted this yet, and do we have an enumeration of the places in
> the Accumulo code where the incompatibilities show up?
>
> One of the incompatibilities is org.apache.hadoop.mapreduce.JobContext
> switching between a class (Hadoop 1) and an interface (Hadoop 2). This can
> be fixed with something to the effect of:
>
>   public static Configuration getConfiguration(JobContext context) {
>     try {
>       // Look up JobContext and its getConfiguration() method at runtime so
>       // no version-specific call instruction is compiled into this class.
>       Class<?> c = TestCompatibility.class.getClassLoader()
>           .loadClass("org.apache.hadoop.mapreduce.JobContext");
>       Method m = c.getMethod("getConfiguration");
>       return (Configuration) m.invoke(context);
>     } catch (Exception e) {
>       throw new RuntimeException(e);
>     }
>   }
>
> Based on a test I just ran, using that getConfiguration method instead of
> calling getConfiguration directly on the context should avoid that one
> incompatibility. Maybe with a couple more changes like that we can get down
> to one bytecode release for all known Hadoop versions?
>
> Adam
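
A minimal sketch, not part of the original thread, of how a helper like the
one Adam quotes might be used where code actually holds a JobContext
reference, e.g. in an InputFormat's getSplits(). The class name
VersionAgnosticInputFormat, the isHadoop2() check, and the empty split list
are illustrative assumptions, not existing Accumulo code:

  import java.lang.reflect.Method;
  import java.util.Collections;
  import java.util.List;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.mapreduce.InputFormat;
  import org.apache.hadoop.mapreduce.InputSplit;
  import org.apache.hadoop.mapreduce.JobContext;

  public abstract class VersionAgnosticInputFormat<K,V> extends InputFormat<K,V> {

    // Resolve getConfiguration() reflectively, as in the helper quoted above,
    // so the compiled class carries no call instruction tied to JobContext
    // being a class (Hadoop 1) or an interface (Hadoop 2).
    static Configuration getConfiguration(JobContext context) {
      try {
        Class<?> c = context.getClass().getClassLoader()
            .loadClass("org.apache.hadoop.mapreduce.JobContext");
        Method m = c.getMethod("getConfiguration");
        return (Configuration) m.invoke(context);
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }

    // For the few places that really must branch on the Hadoop line instead
    // of staying version-neutral, the running version can be read at runtime.
    static boolean isHadoop2() {
      return org.apache.hadoop.util.VersionInfo.getVersion().startsWith("2.");
    }

    @Override
    public List<InputSplit> getSplits(JobContext context) {
      // Before: Configuration conf = context.getConfiguration();  // version-specific bytecode
      Configuration conf = getConfiguration(context);
      // ... use conf to compute splits; an empty list keeps the sketch short ...
      return Collections.emptyList();
    }
  }

Because the lookup happens at runtime, the compiler never emits an
invokevirtual or invokeinterface instruction bound to a particular shape of
JobContext, which is what otherwise breaks when a jar built against one
Hadoop line runs on the other.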
