hadoop-mapreduce-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: doubt on Hadoop job submission process
Date Mon, 13 Aug 2012 10:40:08 GMT
Hi Manoj,

Reply inline.

On Mon, Aug 13, 2012 at 3:42 PM, Manoj Babu <manoj444@gmail.com> wrote:
> Hi All,
>
> Normal Hadoop job submission process involves:
>
> Checking the input and output specifications of the job.
> Computing the InputSplits for the job.
> Setup the requisite accounting information for the DistributedCache of the
> job, if necessary.
> Copying the job's jar and configuration to the map-reduce system directory
> on the distributed file-system.
> Submitting the job to the JobTracker and optionally monitoring its status.
>
> I have a doubt about the 4th point of the job execution flow; could any of
> you explain it?
>
> What is job's jar?

The job.jar is the jar you supply via "hadoop jar <jar>". Technically,
though, it is the jar pointed to by JobConf.getJar() (set via the setJar
or setJarByClass calls).
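The way setJarByClass resolves a class to its containing jar can be sketched roughly as follows. This is a hypothetical helper, not Hadoop's actual source; the class name FindJar and the method findContainingJar are illustrative, though they are modeled loosely on the classloader-resource lookup Hadoop performs internally:

```java
// Hypothetical sketch of how a setJarByClass-style lookup could work:
// find the classpath resource for the class and, if that resource lives
// inside a jar, record the jar's filesystem path as the job jar.
public class FindJar {

    // Returns the path of the jar containing clazz, or null if the class
    // was loaded from a plain directory rather than from a jar.
    static String findContainingJar(Class<?> clazz) {
        String resource = clazz.getName().replace('.', '/') + ".class";
        ClassLoader loader = clazz.getClassLoader();
        if (loader == null) {
            // Bootstrap-loaded classes have a null classloader.
            loader = ClassLoader.getSystemClassLoader();
        }
        java.net.URL url = loader.getResource(resource);
        if (url != null && "jar".equals(url.getProtocol())) {
            // The URL looks like jar:file:/path/to/job.jar!/com/example/Foo.class;
            // strip everything from the "!" separator onward.
            String path = url.getPath();
            return path.substring(0, path.indexOf('!')).replaceFirst("^file:", "");
        }
        return null;
    }

    public static void main(String[] args) {
        // When run from compiled .class files in a directory (not a jar),
        // no containing jar is found and this prints "null".
        System.out.println(findContainingJar(FindJar.class));
    }
}
```

If the lookup returns null (class not packaged in a jar), Hadoop has nothing to ship to the cluster for that class, which is why forgetting to call setJar/setJarByClass commonly surfaces as ClassNotFoundException in the tasks.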

> Is the job's jar the one we submitted to Hadoop, or will Hadoop build it
> based on the job configuration object?

It is the former, as explained above.

-- 
Harsh J
