hadoop-general mailing list archives

From Harsh J <qwertyman...@gmail.com>
Subject Re: running hadoop jobs from within a program
Date Sun, 14 Nov 2010 12:22:49 GMT
Hello,

On Fri, Nov 12, 2010 at 10:25 PM, web service <wbsrvc@gmail.com> wrote:
> Thanks, but is submitting three different jobs, say using
>
> JobClient.submitJob(jobconf1);
> JobClient.submitJob(jobconf2);
> JobClient.submitJob(jobconf3);
>
> different from running -
> tmp="$HADOOP_BIN jar $JAR_LOC $MAIN_CLASS /user/joe/input/input-1/ /user/vadmin/output/output-1/"
> tmp="$HADOOP_BIN jar $JAR_LOC $MAIN_CLASS /user/joe/input/input-2/ /user/vadmin/output/output-2/"
> tmp="$HADOOP_BIN jar $JAR_LOC $MAIN_CLASS /user/joe/input/input-3/ /user/vadmin/output/output-3/"

It isn't different. In both cases a new JobID is assigned to each job
created, and its specific configuration is associated with it upon
submission.
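
For illustration, a minimal sketch of the same thing done programmatically
with the old mapred API. The class name and job names are placeholders, and
the input/output paths are taken from your shell example; each submitJob()
call returns a RunningJob carrying its own freshly assigned JobID:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

public class MultiJobSubmit {
  public static void main(String[] args) throws Exception {
    for (int i = 1; i <= 3; i++) {
      // One JobConf per job, exactly like one "hadoop jar" invocation per job.
      JobConf conf = new JobConf(MultiJobSubmit.class);
      conf.setJobName("example-job-" + i);
      FileInputFormat.setInputPaths(conf, new Path("/user/joe/input/input-" + i + "/"));
      FileOutputFormat.setOutputPath(conf, new Path("/user/vadmin/output/output-" + i + "/"));

      // submitJob() is asynchronous and returns once the job is queued;
      // the JobTracker assigns a new JobID per submission.
      RunningJob job = new JobClient(conf).submitJob(conf);
      System.out.println("Submitted " + job.getID());
    }
  }
}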

>
> I guess every job can have specific JVM options, and I hope that every
> submitted job runs in a separate JVM, no?

Yes, each Task (Map or Reduce, under the Job) runs in a separate JVM
(although JVMs can be reused using a tweak).
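
If it helps, a quick sketch of those two knobs on a JobConf (property names
from the 0.20-era mapred API; the values and the MyJob driver class are just
placeholders, adjust to taste):

JobConf conf = new JobConf(MyJob.class);

// Per-job JVM options passed to every child task JVM of this job.
conf.set("mapred.child.java.opts", "-Xmx512m");

// The "tweak": let a task JVM be reused for up to 5 tasks of the same job
// (-1 means unlimited reuse; the default of 1 gives a fresh JVM per task).
conf.setInt("mapred.job.reuse.jvm.num.tasks", 5);
// Equivalently: conf.setNumTasksToExecutePerJvm(5);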

-- 
Harsh J
www.harshj.com
