hadoop-common-user mailing list archives

From "Albert Chern" <albert.ch...@gmail.com>
Subject Re: JAR packaging
Date Sat, 28 Oct 2006 20:13:35 GMT
I'm not sure if the first option works; if it does, let me know (a sketch of
what I think it would involve is at the bottom of this message).  One of the
developers taught me to use option 2: build a job jar with your dependencies
inside its lib/ directory.  The tasktrackers will automatically add
everything in lib/ to their classpaths; a sketch of the layout follows.
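
A minimal sketch of that layout (the jar, class, and path names here are
placeholders):

    # Build a job jar that carries its dependencies under lib/.
    mkdir -p build/lib
    cp -r classes/. build/           # your compiled job classes
    cp deps/*.jar build/lib/         # third-party jars
    jar cf myjob.jar -C build .

    # Submit it; the framework ships the jar to the cluster and the
    # tasktrackers add lib/*.jar to the task classpath.
    bin/hadoop jar myjob.jar com.example.MyJob input output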

On 10/28/06, Grant Ingersoll <gsingers@apache.org> wrote:
>
> I'm not sure I'm understanding this correctly, and I don't see
> anything about this in the Getting Started section, so...
>
> It seems that when I want to run my application in distributed mode,
> I should invoke <hadoop_home>/bin/hadoop jar <jar> (or bin/hadoop
> <main-class>), and it will copy my JAR onto the DFS and then
> distribute it so that the other nodes in the cluster can access and
> run it.
>
> Classpath-wise, there seem to be two options:
>
> 1. Have all the appropriate dependencies available so they are read
> in by the startup commands and included in the classpath.  Does this
> mean they all need to be on each node at startup time?
>
> 2. Create a single JAR made up of the contents of all the dependencies
>
> Also, the paths must be exactly the same on all the nodes, right?
>
> Is this correct or am I missing something?
>
> Thanks,
> Grant
>
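
For what it's worth, here is what I think option 1 would involve (untested;
the host names and install path are made up).  The bin/hadoop script adds
$HADOOP_HOME/lib/*.jar to the classpath when the daemons start, so the
dependencies would have to be copied onto every node first:

    # Assumes Hadoop is installed at the same path on every node;
    # the jar name is a placeholder.
    for host in node1 node2 node3; do
        scp deps/commons-lang.jar $host:/usr/local/hadoop/lib/
    done

    # Then restart from the master so the startup scripts re-read lib/:
    bin/stop-all.sh && bin/start-all.sh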
