hadoop-common-user mailing list archives

From Mark Kerzner <markkerz...@gmail.com>
Subject Re: Including Additional Jars
Date Mon, 04 Apr 2011 16:40:15 GMT
Then it seems you want to do the opposite of what I have done in this
script. I AM combining all the jars in one jar, and you already have that.

Rather, you want to distribute only your app jar, and put the other ones in
the lib folder on the server.

I know that when you run a standard MR job, you only need to mention your
jar; the other Hadoop jars already come from the lib folder. In other words,
you should be able to run it like this:

hadoop jar your-jar parameters

Since you are using the Cloudera distro, this runs the following


which in turn runs this script

export HADOOP_HOME=/usr/lib/hadoop-0.20
exec /usr/lib/hadoop-0.20/bin/hadoop "$@"

Since HADOOP_HOME is set, it knows that the libraries are here:


Therefore, I think that if you put your additional libraries in that same
folder, Hadoop should just pick them up.
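The workflow being suggested can be sketched like this, using temp directories as stand-ins so it runs anywhere (on a real CDH3 node the lib folder would be /usr/lib/hadoop-0.20/lib, and the jar names below are just the ones from this thread):

```shell
# Stand-ins for the real paths; replace with the actual dist/ and server dirs.
DIST=$(mktemp -d)      # stand-in for the NetBeans dist/ folder
SERVER=$(mktemp -d)    # stand-in for the server's Hadoop directory
mkdir -p "$DIST/lib" "$SERVER/lib"
touch "$DIST/lib/hbase.jar" "$DIST/lib/zookeeper.jar" "$DIST/MyProgram.jar"

# One-time: copy the dependency jars to where Hadoop already looks.
cp "$DIST"/lib/*.jar "$SERVER/lib/"

# Every rebuild: copy only the small application jar.
cp "$DIST/MyProgram.jar" "$SERVER/"

ls "$SERVER"
```

The point is that the large dependency jars move once, and only the small, frequently rebuilt MyProgram.jar crosses the wire on each code change.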


On Mon, Apr 4, 2011 at 11:31 AM, Shuja Rehman <shujamughal@gmail.com> wrote:

> hi,
> I do not understand it. Can you explain it with my example?
> I have following jars in lib folder of dist created by netbeans
> (dist/lib/).
> commons-logging-1.1.1.jar
> guava-r07.jar
> hadoop-0.20.2+737-core.jar
> hbase.jar
> hbase-0.89.20100924+28.jar
> log4j-1.2.15.jar
> mysql-connector-java-5.1.7-bin.jar
> UIDataTransporter.jar
> zookeeper.jar
> and dist folder contains only
> MyProgram.jar
> At the moment, I am combining all the jar files to produce a single file,
> but now I want to put the dist/lib/*.jar files on the server just once, so
> that only MyProgram.jar has to be copied every time I change the code.
> Can you adapt your code to my example?
> Thanks
> On Mon, Apr 4, 2011 at 8:17 PM, Mark Kerzner <markkerzner@gmail.com>wrote:
>> Shuja,
>> here is what I do in NB environment
>> #!/bin/sh
>> # Run from bin/: unpack the app jar, then repack the dist/ contents
>> # into a single jar for Hadoop.
>> cd ../dist
>> jar -xf Chapter1.jar
>> jar -cmf META-INF/MANIFEST.MF ../Chapter3-for-Hadoop.jar *
>> cd ../bin
>> echo "Repackaged for Hadoop"
>> and it does the job. I run it only when I want to build this jar.
>> Mark
>> On Mon, Apr 4, 2011 at 10:06 AM, Shuja Rehman <shujamughal@gmail.com>wrote:
>>> Hi All
>>> I have created a MapReduce job, and to run it on the cluster I have
>>> bundled all the jars (hadoop, hbase, etc.) into a single jar, which
>>> increases the overall file size. During development I need to copy this
>>> complete file again and again, which is very time consuming. Is there any
>>> way that I can copy only the program jar, without copying the lib files
>>> again and again? I am using NetBeans to develop the program.
>>> Kindly let me know how to solve this issue.
>>> Thanks
>>> --
>>> Regards
>>> Shuja-ur-Rehman Baig
>>> <http://pk.linkedin.com/in/shujamughal>
> --
> Regards
> Shuja-ur-Rehman Baig
> <http://pk.linkedin.com/in/shujamughal>
