hadoop-general mailing list archives

From web service <wbs...@gmail.com>
Subject Re: running hadoop jobs from within a program
Date Sun, 14 Nov 2010 22:40:37 GMT
Thanks, had figured it out. It is fun to figure out how things work :)

On Sun, Nov 14, 2010 at 4:22 AM, Harsh J <qwertymaniac@gmail.com> wrote:

> Hello,
>
> On Fri, Nov 12, 2010 at 10:25 PM, web service <wbsrvc@gmail.com> wrote:
> > Thanks, but is submitting three different jobs, say using
> >
> > JobClient.submitJob(jobconf1);
> > JobClient.submitJob(jobconf2);
> > JobClient.submitJob(jobconf3);
> >
> > different from running the following?
> >
> > tmp="$HADOOP_BIN jar $JAR_LOC $MAIN_CLASS /user/joe/input/input-1/ /user/vadmin/output/output-1/"
> > tmp="$HADOOP_BIN jar $JAR_LOC $MAIN_CLASS /user/joe/input/input-2/ /user/vadmin/output/output-2/"
> > tmp="$HADOOP_BIN jar $JAR_LOC $MAIN_CLASS /user/joe/input/input-3/ /user/vadmin/output/output-3/"
>
> It isn't different. In both cases a new JobID is assigned to each job
> created, and its specific configuration is associated with it upon
> submission.
>
> >
> > I guess every job can have specific JVM options, and I hope that every
> > submitted job runs in a separate JVM, no?
>
> Yes, each Task (Map or Reduce, under the Job) runs in a separate JVM
> (although JVMs can be reused using a tweak).
>
> --
> Harsh J
> www.harshj.com
>
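[Editor's note: for reference, a minimal sketch, not part of the original thread, of what the programmatic submission discussed above could look like with the old org.apache.hadoop.mapred API. The driver class name is made up, the paths follow the thread's example, and the -Xmx value is only illustrative.]

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

public class MultiJobDriver {
  public static void main(String[] args) throws Exception {
    for (int i = 1; i <= 3; i++) {
      // One independent JobConf per job, just like three separate
      // "hadoop jar" invocations from a shell script.
      JobConf conf = new JobConf(MultiJobDriver.class);
      conf.setJobName("example-job-" + i);

      // Set mapper/reducer and key/value classes here as usual;
      // the identity defaults are used if nothing is set.
      FileInputFormat.setInputPaths(conf, new Path("/user/joe/input/input-" + i));
      FileOutputFormat.setOutputPath(conf, new Path("/user/vadmin/output/output-" + i));

      // Per-job child JVM options; every task launched for this job gets them.
      conf.set("mapred.child.java.opts", "-Xmx512m");

      // submitJob() returns immediately with a new JobID for each job;
      // JobClient.runJob(conf) would block until completion instead.
      RunningJob running = new JobClient(conf).submitJob(conf);
      System.out.println("Submitted " + running.getID());
    }
  }
}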

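[Editor's note: on the JVM-reuse "tweak" mentioned at the end of the reply, with MRv1 (current at the time of this thread) the relevant knob is mapred.job.reuse.jvm.num.tasks; the value below is only an example.]

// Allow up to 10 tasks of the same job to run sequentially in one JVM
// (the default of 1 gives every task its own JVM; -1 means unlimited reuse).
conf.setNumTasksToExecutePerJvm(10);
// Equivalent property form:
// conf.set("mapred.job.reuse.jvm.num.tasks", "10");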