I am not sure the original suggestion will work for your case.

My understanding is that you want to use some API that only exists in slf4j version 1.6.4, but a different version of this library already exists in your Hadoop environment, which is quite possible.

Changing the Maven build of the application may not work, for these reasons:

1) If any API of 1.6.4 is used in your application code, then the 1.6.4 jar must be shipped with your code to the Hadoop cluster.
2) What you are looking for may be the parameter "mapreduce.user.classpath.first" (you can google it), which makes the user's class files load before Hadoop's own in the mapper/reducer tasks.
3) Keep in mind that even if your own code is fine, the jar you submit may not be backward compatible; in that case it can cause problems in Hadoop's own mapper/reducer code, since the slf4j 1.6.4 classes are now the ones loaded into the JVM. This case is rare, though: most of the time you will submit a later version of the jar than the one shipped with Hadoop, and most libraries are backward compatible.
4) In the future, Hadoop could use a separate classloader to load the user's jar files, as the old J2EE containers did, to solve this kind of problem.
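For example, with the libjars route both the flag and the extra jar can be passed on the command line, assuming your driver implements Tool so that GenericOptionsParser picks these options up. This is only a sketch: the jar name, driver class, and paths are made up, and the exact property name depends on your Hadoop version.

```shell
# Sketch only: myjob.jar, com.example.MyDriver and the paths are illustrative.
# Hadoop 1.x uses mapreduce.user.classpath.first; Hadoop 2.x uses
# mapreduce.job.user.classpath.first -- check the docs for your version.
hadoop jar myjob.jar com.example.MyDriver \
  -D mapreduce.user.classpath.first=true \
  -libjars /path/to/slf4j-api-1.6.4.jar \
  /input /output
```
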


Date: Tue, 27 Aug 2013 09:05:51 -0700
Subject: Re: Jar issue
From: jamalshasha@gmail.com
To: user@hadoop.apache.org

I am right now using the libjars option.
How do I do what you suggested using that route?

On Tue, Aug 27, 2013 at 8:51 AM, Shahab Yunus <shahab.yunus@gmail.com> wrote:
One idea: you can use Maven's exclusion mechanism (provided you are using Maven to build your application) when declaring the Hadoop dependency, excluding the slf4j that comes in transitively with Hadoop and then adding your own slf4j as a separate dependency. Something like this:
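A sketch of what that could look like in the pom. The Hadoop artifactId, the versions, and the exact slf4j artifacts to exclude are illustrative; run `mvn dependency:tree` to see what your Hadoop dependency actually pulls in and adjust accordingly.

```xml
<!-- Sketch only: artifactIds and versions are examples, not your exact setup. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<!-- Then declare the slf4j version you actually want to use. -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>1.6.4</version>
</dependency>
```

Note that, as discussed above, excluding the jar from the build only fixes compilation; at runtime the task JVM may still pick up the cluster's copy first unless the classpath-precedence property is set.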