hadoop-common-dev mailing list archives

From "Runping Qi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1876) Persisting completed jobs status
Date Wed, 09 Jan 2008 15:27:34 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12557292#action_12557292 ]

Runping Qi commented on HADOOP-1876:
------------------------------------


I am fine with the approach of this patch if it turns out to be simpler than using JobHistory.

Can this patch make the JobHistory log obsolete? Or is that at least the intent?
I hate to see the same information logged in different places,
in different forms, through different code paths.

Apart from being in text format (which has its pros and cons), the job history log is event based:
from it you can reconstruct the whole execution history of a job and derive various time-series data,
such as the number of mappers/reducers in different states (waiting, running, sorting, shuffling,
completed, etc.).
This kind of information is important for understanding the runtime behavior of a job.

If RunningJob can easily accommodate those kinds of time-series data, then I am OK with obsoleting
the job history log.
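To make the time-series point concrete, an event-based log can be folded into per-state task counts. A minimal sketch, assuming a hypothetical event record (this is not Hadoop's actual JobHistory format; `Event`, `replay` and the state names are illustrative only):

```java
import java.util.*;

// Sketch only: each event records a timestamp and one task's state
// transition; replaying the events in time order yields the running
// count of tasks per state (waiting, running, completed, ...) --
// the kind of time-series data described above.
public class TaskStateTimeline {
    // Hypothetical event: at time `ts`, a task entered `state` and
    // left `prevState` (null for the task's first event).
    record Event(long ts, String prevState, String state) {}

    /** Replays events in timestamp order, returning a snapshot of
     *  per-state task counts after every event. */
    static List<Map<String, Integer>> replay(List<Event> events) {
        events.sort(Comparator.comparingLong(Event::ts));
        Map<String, Integer> counts = new TreeMap<>();
        List<Map<String, Integer>> series = new ArrayList<>();
        for (Event e : events) {
            if (e.prevState() != null)
                counts.merge(e.prevState(), -1, Integer::sum);
            counts.merge(e.state(), 1, Integer::sum);
            series.add(new TreeMap<>(counts));  // snapshot
        }
        return series;
    }

    public static void main(String[] args) {
        List<Event> events = new ArrayList<>(List.of(
            new Event(1, null, "WAITING"),
            new Event(2, null, "WAITING"),
            new Event(3, "WAITING", "RUNNING"),
            new Event(4, "RUNNING", "COMPLETED")));
        // Last snapshot: one task still waiting, one completed.
        System.out.println(events.size());
        System.out.println(replay(events).get(3));
    }
}
```

A status-snapshot store (like the one this patch proposes) cannot reproduce such a series after the fact, which is why losing the event log would lose information.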

Also, have folks considered the relationship between this and HADOOP-2178?



> Persisting completed jobs status
> --------------------------------
>
>                 Key: HADOOP-1876
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1876
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: mapred
>         Environment: all
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>            Priority: Critical
>             Fix For: 0.16.0
>
>         Attachments: patch1876.txt, patch1876.txt
>
>
> Currently the JobTracker keeps information about completed jobs in memory.
> This information is flushed from the cache either when it has outlived a retention period
(#RETIRE_JOB_INTERVAL) or when the limit on completed jobs in memory has been reached
(#MAX_COMPLETE_USER_JOBS_IN_MEMORY).

> Also, if the JobTracker is restarted (due to being recycled or due to a crash), information
about completed jobs is lost.
> If any of the above scenarios happens before the job information is queried by a Hadoop
client (normally the job submitter or a monitoring component), there is no way to obtain that
information.
> A way to avoid this is for the JobTracker to persist the completed job information in DFS
upon job completion. This would be done at the time the job is moved to the completed jobs
queue. Then, when the JobTracker is queried for information about a completed job that is not
found in the in-memory queue, a lookup in DFS would be done to retrieve it.

> A directory in DFS (under mapred/system) would be used to persist completed job information;
for each completed job there would be a directory named with the job ID, containing all
the information about the job: status, job profile, counters and completion events.
> A configuration property will indicate for how long persisted job information should be
kept in DFS. After that period it will be cleaned up automatically.
> This improvement would not introduce API changes.
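The cache-then-DFS lookup described above can be sketched as follows. This is a hypothetical illustration, not the patch's actual classes: `CompletedJobStore`, `pathFor` and the `completed-jobs` path segment are invented here, and a plain `Map` stands in for DFS.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of the proposed flow: persist job status to a per-job DFS
// directory at completion time, evict the in-memory copy when the
// job retires, and fall back to DFS on a cache miss.
public class CompletedJobStore {
    private final Map<String, String> memoryCache = new ConcurrentHashMap<>();
    // Stand-in for DFS: maps a path such as
    // "mapred/system/completed-jobs/<jobId>/status" to its contents.
    private final Map<String, String> dfs;

    CompletedJobStore(Map<String, String> dfs) { this.dfs = dfs; }

    void jobCompleted(String jobId, String status) {
        // Persist to DFS when the job moves to the completed queue,
        // and keep a copy in the in-memory cache.
        dfs.put(pathFor(jobId), status);
        memoryCache.put(jobId, status);
    }

    void retire(String jobId) {
        // RETIRE_JOB_INTERVAL / MAX_COMPLETE_USER_JOBS_IN_MEMORY only
        // evict the in-memory copy; the DFS copy survives (until the
        // configured retention period expires).
        memoryCache.remove(jobId);
    }

    String getStatus(String jobId) {
        // Memory first, DFS on a miss.
        String cached = memoryCache.get(jobId);
        return cached != null ? cached : dfs.get(pathFor(jobId));
    }

    static String pathFor(String jobId) {
        return "mapred/system/completed-jobs/" + jobId + "/status";
    }
}
```

Because the fallback happens inside the query path, clients see no API change, which matches the last point of the description.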

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

