hadoop-mapreduce-user mailing list archives

From Christoph Schmitz <christoph.schm...@1und1.de>
Subject RE: Understanding job completion in other nodes
Date Tue, 26 Jun 2012 09:19:22 GMT
Hi Hamid,

I'm not sure if I understand your question correctly, but I think this is exactly what the
standard workflow in a Hadoop application looks like:

Job job1 = new Job(...);
// set up the job: set Mapper and Reducer, etc.
job1.waitForCompletion(...); // at this point, the cluster will run job 1 and wait for its completion

// think about the results from job 1, plan job 2 accordingly

Job job2 = new Job(...);
// set up another job
job2.waitForCompletion(...); // at this point, the cluster will run job 2 and wait for its completion

etc.
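
For completeness, here is a minimal, self-contained sketch of that driver pattern against the
org.apache.hadoop.mapreduce API shipped with 0.20.2. The class name ChainedJobs, the job names,
the argument-based paths, and the use of the identity Mapper/Reducer as stand-ins for real
map/reduce classes are illustrative assumptions, not part of the original message:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChainedJobs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Job 1: waitForCompletion(true) submits the job to the cluster and
    // blocks the driver until every map and reduce task has finished.
    Job job1 = new Job(conf, "job 1");
    job1.setJarByClass(ChainedJobs.class);
    job1.setMapperClass(Mapper.class);    // identity mapper, stands in for a real one
    job1.setReducerClass(Reducer.class);  // identity reducer, stands in for a real one
    job1.setOutputKeyClass(LongWritable.class);
    job1.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job1, new Path(args[0]));
    FileOutputFormat.setOutputPath(job1, new Path(args[1]));
    if (!job1.waitForCompletion(true)) {
      System.exit(1);                     // stop the chain if job 1 failed
    }

    // Job 2 is only submitted after job 1 has completed on all nodes,
    // so it can safely read job 1's output as its input.
    Job job2 = new Job(conf, "job 2");
    job2.setJarByClass(ChainedJobs.class);
    job2.setMapperClass(Mapper.class);
    job2.setReducerClass(Reducer.class);
    job2.setOutputKeyClass(LongWritable.class);
    job2.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job2, new Path(args[1]));
    FileOutputFormat.setOutputPath(job2, new Path(args[2]));
    System.exit(job2.waitForCompletion(true) ? 0 : 1);
  }
}

Each waitForCompletion call acts as the synchronization barrier you describe: job 2 is not even
submitted until job 1 has finished on every node.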

So I think "waitForCompletion" is what you were asking about, right?

Regards,
Christoph

-----Original Message-----
From: Hamid Oliaei [mailto:oliaei@gmail.com]
Sent: Tuesday, June 26, 2012 11:00
To: mapreduce-user@hadoop.apache.org
Subject: Understanding job completion in other nodes

Hi,

I want to run a job on all of the nodes, and when the job on one node has completed, that node
must wait until the jobs on the other nodes finish.
For that, every node must signal all of the other nodes, and only when every node has received
the signal from all of the others may the next job be run.
How can I handle that in Hadoop?
Is there any way to detect job completion on the other nodes?
P.S.: I am using Hadoop 0.20.2.

Thanks,     


Hamid Oliaei

Oliaei@gmail.com

