hadoop-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Compiling a mapreduce job under the new Apache Hadoop YARN
Date Sat, 11 Aug 2012 15:54:37 GMT
Hi Pantazis,

It is better to use Maven or a similar build tool to develop MR Java
programs, as it handles the dependencies for you. In Maven you may use
the hadoop-client POM.
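
For example, a minimal dependency entry in a pom.xml could look like the
below (the version here is only illustrative; use the one that matches the
release you actually run):

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <!-- illustrative version; pick the release your cluster runs -->
      <version>2.0.1-alpha</version>
    </dependency>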

Alternatively, if you have a hadoop setup available, do this:

$ javac -cp `hadoop classpath`:. -d class_dir Example.java

The "hadoop classpath" command prints all the requisite jars of the
installed version, so they get added to the java(c) classpath automatically.
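
As a rough end-to-end sketch of the same flow (example.jar, Example and the
input/output paths below are placeholder names, not anything shipped with
Hadoop):

$ javac -cp `hadoop classpath`:. -d class_dir Example.java
$ jar cf example.jar -C class_dir .
$ hadoop jar example.jar Example /input/path /output/path

The last step runs the packaged job through the installed Hadoop, so no
extra classpath handling is needed at runtime either.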

On Sat, Aug 11, 2012 at 7:25 AM, Pantazis Deligiannis
<akis1986@gmail.com> wrote:
> In the old Hadoop MapReduce v1 framework (e.g. 1.0.3) the way to compile a job was to use
> the already provided "hadoop-core-HADOOP_VERSION.jar" through the following command:
>
>     javac -cp ".../hadoop-core-HADOOP_VERSION.jar" -d class_dir Example.java
>
> In the newest releases for Hadoop MapReduce v2 YARN (e.g. 2.0.1-alpha or the Cloudera
> 4.0), however, no similar core jar is provided to use during compilation. The closest
> I have managed to get is by using the "hadoop-common-2.0.1-alpha.jar" in the /share/hadoop/common
> dir, but this still leaves a lot of compilation errors.
>
> Is there some other specific way, or different jars to use, for compiling a YARN job? Or
> do you perhaps need to build the source yourself in the new releases to get the
> core jar? I have searched quite a lot but can't find a relevant answer.
>
> Thank you very much!



-- 
Harsh J
