hadoop-common-dev mailing list archives

From "Alejandro Abdelnur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1121) Recovering running/scheduled jobs after JobTracker failure
Date Fri, 11 May 2007 03:42:15 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12494939
] 

Alejandro Abdelnur commented on HADOOP-1121:
--------------------------------------------

* On Doug's comment about deleting the output directory if it exists:

At job submission time, a check that the output directory does not exist is performed. If it exists, the job submission fails.

So, if a previously scheduled, not-yet-completed job has an output directory, it means the job never completed its execution.

Thus, deleting the output directory brings the job back to a clean submission state (as before running).
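The two rules above can be sketched as follows. This is a minimal illustration against the local filesystem rather than DFS, and the class and method names are mine, not Hadoop's actual API:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the submission-time check and the recovery-time cleanup,
// using the local filesystem in place of DFS; names are illustrative.
class OutputDirPolicy {

    // Submission time: refuse the job if the output directory already exists.
    public static void checkOnSubmit(Path outputDir) throws IOException {
        if (Files.exists(outputDir)) {
            throw new IOException("Output directory already exists: " + outputDir);
        }
    }

    // Recovery time: an existing output directory can only belong to a job
    // that never completed, so deleting it restores the clean pre-run state.
    public static void resetOnRecover(Path outputDir) throws IOException {
        if (Files.exists(outputDir)) {
            deleteRecursively(outputDir);
        }
    }

    private static void deleteRecursively(Path p) throws IOException {
        if (Files.isDirectory(p)) {
            try (DirectoryStream<Path> entries = Files.newDirectoryStream(p)) {
                for (Path child : entries) {
                    deleteRecursively(child);
                }
            }
        }
        Files.delete(p);
    }
}
```

After `resetOnRecover`, `checkOnSubmit` passes again, which is exactly the "back to clean submission state" argument.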

* On Owen's comments:

On #1, there is no way (at the moment), based on the information being persisted by the JT, to know whether the job had started execution or was still pending at the time the JT died (was killed). That's why the proposal is to automatically resubmit jobs only if the autorecovery flag is set to TRUE.

On #2, I'd say: if the input exists the job is relaunched, else it is discarded. About the output, refer to the first comment (to Doug's).

On the "In particular, the framework should not be deleting ..." comment, the autorecovery is not a default behavior but an explicitly configured one; thus, when submitting a job with autorecovery set to TRUE, you are agreeing to the output being deleted and the job being re-launched if necessary.
 

> Recovering running/scheduled jobs after JobTracker failure
> ----------------------------------------------------------
>
>                 Key: HADOOP-1121
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1121
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: mapred
>         Environment: all
>            Reporter: Alejandro Abdelnur
>
> Currently all running/scheduled jobs are kept in memory in the JobTracker. If the JobTracker
goes down all the running/scheduled jobs have to be resubmitted.
> Proposal:
> (1) On job submission the JobTracker would save the job configuration (job.xml) in a
jobs DFS directory using the jobID as name.
> (2) On job completion (success, failure, kill) it would delete the job configuration from the jobs DFS directory.
> (3) On JobTracker failure the jobs DFS directory will have all running/scheduled jobs
at failure time.
> (4) On startup the JobTracker would check the jobs DFS directory for job config files. If there are none, no failure happened on the last stop and there is nothing to be done. If there are job config files in the jobs DFS directory, continue with the following recovery steps.
> (A) rename all job config files to $JOB_CONFIG_FILE.recover.
> (B) for each $JOB_CONFIG_FILE.recover: delete the output directory if it exists, schedule the job using the original job ID, and delete the $JOB_CONFIG_FILE.recover (a new $JOB_CONFIG_FILE will be there per scheduling, per step #1).
> (C) when B is completed start accepting new job submissions.
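Steps (4)(A)-(C) could look roughly like the following. This is a local-filesystem sketch with a hypothetical `Scheduler` interface standing in for the real scheduling call; it is not the actual JobTracker code:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Sketch of recovery steps (4)(A)-(C) against the local filesystem;
// Scheduler.submitJob is a stand-in for the real scheduling call, which
// per step (1) writes a fresh $JOB_CONFIG_FILE for the job.
class JobRecovery {

    interface Scheduler {
        void submitJob(String jobId, Path jobConfig) throws IOException;
    }

    // Returns the recovered job IDs in scan order.
    public static List<String> recover(Path jobsDir, Scheduler scheduler) throws IOException {
        // (A) rename every job config file to $JOB_CONFIG_FILE.recover
        List<Path> toRecover = new ArrayList<>();
        try (DirectoryStream<Path> configs = Files.newDirectoryStream(jobsDir, "*.xml")) {
            for (Path config : configs) {
                Path marked = config.resolveSibling(config.getFileName() + ".recover");
                Files.move(config, marked);
                toRecover.add(marked);
            }
        }
        // (B) resubmit each job under its original ID, then drop the marker;
        // scheduling re-creates the config file, so nothing is lost.
        // (Deleting the job's output directory, per the first comment,
        // would also go here.)
        List<String> recovered = new ArrayList<>();
        for (Path marked : toRecover) {
            String name = marked.getFileName().toString();
            String jobId = name.substring(0, name.length() - ".xml.recover".length());
            scheduler.submitJob(jobId, marked);
            Files.delete(marked);
            recovered.add(jobId);
        }
        // (C) only after this returns does the caller accept new submissions
        return recovered;
    }
}
```

The rename-to-`.recover` pass matters: it separates the surviving configs from the fresh ones that scheduling writes, so a crash during recovery itself is still recoverable.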
> Other details:
> A configuration flag would enable/disable the above behavior; if switched off (the default behavior), nothing of the above happens.
> A startup flag could switch off job recovery for systems with recovery set to ON.
> Changes to the job ID generation should be put in place to avoid collisions with job IDs from previous failed runs; for example, appending a JT startup timestamp to the job IDs would do.
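For illustration, the timestamp-based ID scheme might look like this; the class and ID format here are hypothetical, not the real JobTracker's:

```java
// Sketch of collision-proof job IDs: the JobTracker fixes one startup
// timestamp and embeds it in every ID it hands out, so IDs minted before
// a crash can never collide with IDs minted after recovery.
class JobIdGenerator {
    private final long jtStartupTime; // fixed once per JobTracker start
    private int nextSeq = 0;

    public JobIdGenerator(long jtStartupTime) {
        this.jtStartupTime = jtStartupTime;
    }

    public synchronized String nextId() {
        return "job_" + jtStartupTime + "_" + (++nextSeq);
    }
}
```

Two JT runs necessarily have different startup timestamps, so their ID spaces are disjoint even when the sequence numbers repeat.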
> Further improvements on top of this one:
> This mechanism would allow having a standby JobTracker node to be started in case of main JobTracker failure. Going a little further, the backup JobTrackers could run in warm mode, and heartbeats and ping calls among them would activate a warm standby JobTracker as the new main JobTracker. Together with an enhancement in the JobClient (keeping a list of backup JobTracker URLs), this would enable client fallback to backup JobTrackers.
> State about partially run jobs could also be kept (tasks completed/in-progress/pending). This would make it possible to resume jobs halfway through instead of restarting them.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

